Optical service management: enabling differentiated metro service offerings

Feb. 1, 2001

A survey of emerging technologies useful in enabling differentiated service offerings and decreasing network cost in metropolitan optical access networks.

James Scott, Gang Sun, and Lester Yung, Geyser Networks

Deregulation of the telecommunications industry has fostered a more intensely competitive landscape where both traditional and next-generation service providers must compete for revenue. This increased competition has given rise to new service models intended to allow providers to effectively differentiate themselves from their competitors while optimizing network cost. Observable trends in service capabilities include bandwidth on demand, subscriber-based provisioning, usage-based billing, and the outsourcing of both corporate and provider network infrastructure.

While standards bodies and systems vendors have focused on the convergence of narrowband and broadband network infrastructures and their associated next-generation interconnection models, these technologies, though necessary for service providers to achieve their service and cost objectives, no longer provide sufficient differentiation for either the service provider or the system vendor. As a result, system vendors have introduced innovations ahead of the consensus-driven standards bodies that enable new service capabilities while providing a graceful migration from legacy network architectures. These technologies address issues such as application-specific quality of service (QoS), adaptive and efficient network utilization, service agility, and increased manageability.

Within the past few years, Internet Protocol (IP) has become pervasive as the network-layer protocol of choice. More recently, debate has continued over which link-layer protocol is more appropriate for transporting IP: IP over ATM over SONET, or IP directly over SONET using packet over SONET (PoS).

The essential argument is whether the application-specific QoS provided by ATM outweighs its inherent management and implementation complexities and transport inefficiencies. Conversely, while PoS provides more efficient transport of IP and simplified management, it is unable to provide deterministic QoS in a scalable manner or dynamic connection provisioning as can be accomplished using ATM switched virtual circuits.

However, with the emergence of Multiprotocol Label Switching (MPLS) as an approach to building efficient, scalable IP networks capable of delivering deterministic QoS over native IP, the argument of IP/ATM versus PoS has become more of a historical artifact. In fact, a recently observable trend is that ATM is becoming less of a backbone transport technology, yielding to high-capacity connectionless IP/PoS and connection-oriented IP/MPLS/PoS pipes, and becoming more of an access technology used for aggregating DSL and voice traffic.

The persistence of ATM being used for these applications is largely due to the maturity of existing standards in these areas, specified by ATM-centric standards bodies. However, as standards emerge for handling these applications over MPLS and as carrier networks continue their migration toward unified control and data planes using IP, this final bastion of ATM and frame relay dominance in access networks will also yield to an MPLS-based approach for all but legacy network architectures.

Meanwhile, standards bodies have spent the past year developing an IP-based optical control plane in an attempt to provide a variety of interconnection models that reduce management complexity and increase service agility. This work provides the ability to manage IP-enabled optical networks using one of several available interconnection models: the traditional overlay model; the interdomain model, which achieves scalability while still allowing service providers to manage the view of the optical-domain topology available to client IP networks; and the integrated model, in which devices in the client IP networks and optical transport domain are routing peers.

By embracing an IP-based control plane, service providers are able to deploy scalable and maintainable networks by which sophisticated services such as distributed-connection provisioning and IP-based virtual private networks (VPNs) can be realized. Such solutions provide a highly manageable and cost-effective approach to delivering services for emerging business models based on such value propositions as outsourcing of applications and network infrastructure. Additionally, by increasing service agility, providers add momentum to the evolutionary trend toward an economically efficient bandwidth market.

Yet despite the growing ubiquity of IP, there are still several key legacy protocols that must be supported. Edge devices must include a robust set of multiservice interfaces of varying speeds. In addition to legacy link-layer protocols, advances in optical transport technologies such as DWDM threaten to render SONET and SDH as legacy transport protocols.

Whereas SONET and SDH continue on an evolutionary path with the multiplexing hierarchy growing to higher speeds, including OC-768 (40 Gbits/sec), DWDM provides the ability to transfer traffic over an increasingly dense number of wavelengths in a single fiber. However, DWDM has not yet been pervasively deployed within optical metropolitan access networks.

A recent forecast from Pioneer Consulting (Cambridge, MA) estimates that 98% of the $6.09 billion spent on optical metro solutions this year will be spent on SONET devices. This slowness to converge toward DWDM-based solutions in the metro is due to several factors:

  • Initial vendor offerings employing DWDM were little more than transparent long-haul solutions repackaged for smaller physical footprints. This resulted in solutions that were unable to compete with the price points of emerging cost-optimized SONET-based access devices.
  • While the value proposition of DWDM in long-haul transport networks is based on achieving higher fiber utilization, reduced fiber counts, and fewer regeneration devices, these benefits are less dramatic in metropolitan-area networks (MANs) that possess smaller network diameters than long-haul networks.
  • Initial DWDM metropolitan offerings did not address an essential problem: packet awareness and the ability to intelligently map and groom multiservice access traffic onto optical transport paths.
  • The slower-than-expected deployment of DSL by incumbent carriers not wanting to cannibalize their leased-line revenue has served to further extend the life of SONET/SDH in carrier access networks.

Accordingly, while DWDM will likely prevail as a compelling transport technology in metro-network architectures, the SONET and SDH market will clearly endure for some years to come, particularly as higher-speed SONET/SDH equipment becomes available.

A common optical architecture for service-provider networks is to employ SONET or SDH access rings, with DWDM being used for high-capacity long-haul transport. As such, an access device that provides a gateway between SONET/SDH rings and multiwavelength DWDM transport is certainly compelling. However, in many cases, service providers have existing DWDM equipment in their long-haul transport networks. This raises a fundamental question regarding whether next-generation metro optical solutions should provide integrated DWDM capabilities.

Clearly, the optimal approach, from a service-provider perspective, is for systems vendors to provide maximum flexibility in this regard. While those metro systems vendors that did not adequately anticipate this slow convergence to DWDM in the metro may argue that systems should employ a "bottom-up" approach, building on top of integrated DWDM optical architectures, in many cases service providers actually prefer standalone SONET/SDH equipment for use in conjunction with previously deployed DWDM equipment. As such, a flexible approach that allows a device to provide standalone SONET/SDH or DWDM, as well as an integrated capability as a configurable option, addresses the largest cross section of service-provider needs.

One final cautionary note is to simply point out that many next-generation SONET and DWDM offerings employ nonstandard framing technologies. Service providers interested in deploying multivendor network architectures are well served to consider interoperability among devices at all layers of operation, including the physical layer.

A common objective of next-generation optical devices is to decrease the time required to provision circuits and to eliminate costly service-provider truck rolls. Traditionally, provisioning end-to-end paths through an optical transport network has been both time-consuming and labor-intensive: crossconnects are established piecemeal through the network, requiring numerous man-hours of work and resulting in connections being provisioned on time scales measured in weeks or even months. The new breed of equipment emerging from optical-systems vendors seeks to compress these time scales to seconds or minutes by providing control planes that leverage dynamic IP-based routing and distributed MPLS-based signaling.

Note that in this context, optical-edge devices will be responsible for integrated IP-routing functionality where both control- and data-plane traffic is forwarded based on previously computed paths calculated using a distributed routing algorithm. Conversely, optical crossconnects in the core transport domain will leverage IP signaling, routing, and forwarding strictly for control-plane traffic used to establish optical paths through the transport domain.

As a result, rather than network operators having to establish individual crossconnects to instantiate an end-to-end path, element management servers (EMS) or IP-based client devices can generate a signaling message to be distributed along a specified route to establish an end-to-end path. The realization of this approach will result in lower cost and more dynamic service capabilities.

In addition, by providing performance monitoring and collection of billing information at per-packet, per-cell, and per-frame granularity, the next generation of optical-edge devices enables new capabilities in usage-based billing and service-level-agreement (SLA) verification. As the name implies, usage-based billing denotes service offerings in which subscribers are charged only for the network resources they use. This information may be collected by billing systems on a per-user basis or on a per-user/per-application basis, depending on the sophistication of the ingress classification and data-collection capabilities of the edge device being used.

Similarly, this per-session data or data aggregates may be uploaded to a central repository such as an EMS or other type of server that can be queried remotely by subscribers to ascertain what their usage patterns are and whether their SLA (connectivity, peak and mean utilization, packet loss, etc.) is being met by the service provider. For services with end-to-end delay objectives, this may imply that the service provider will need to deploy a network-monitoring application that collects information regarding end-to-end delay performance for each link in the network.
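The billing and SLA-verification flow described above can be sketched in a few lines. This is an illustrative model only, with hypothetical record formats and thresholds; a real system would consume records from the edge devices' data-collection interfaces.

```python
# Sketch: aggregating per-session usage records for usage-based
# billing and a simple SLA check. Record format, field names, and
# the loss-rate threshold are all hypothetical.

from collections import defaultdict

def bill_and_verify(records, sla_max_loss=0.001):
    """records: (user, bytes_sent, bytes_lost) tuples from edge devices."""
    usage = defaultdict(lambda: [0, 0])   # user -> [sent, lost]
    for user, sent, lost in records:
        usage[user][0] += sent
        usage[user][1] += lost
    report = {}
    for user, (sent, lost) in usage.items():
        loss_rate = lost / sent if sent else 0.0
        report[user] = {"bytes": sent,
                        "sla_met": loss_rate <= sla_max_loss}
    return report

records = [("acme", 1_000_000, 500), ("acme", 2_000_000, 100),
           ("globex", 500_000, 5_000)]
print(bill_and_verify(records))
```

A subscriber querying the central repository would receive exactly this kind of aggregate: total usage for billing, plus a pass/fail view of each SLA metric.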

A popular approach being embraced by emerging competitive local-exchange carriers is to deploy metro optical architectures that use Ethernet as the link-layer technology. This is done to benefit from the low cost of Gigabit Ethernet and capitalize on the inevitable arrival of 10-Gigabit Ethernet.

To support this trend, the latest offerings from optical access vendors provide the ability to deploy sophisticated VLAN or VPN technologies. These new capabilities typically rely either on Layer 2 virtual local-area-network (VLAN) protocols or Layer 3 IP-based VPN technology and provide the ability to support application service providers (ASPs), content service providers, outsourcing of corporate-network infrastructure, carriers' carrier services, etc.

The VLAN approach leverages traditional standards-based protocols defined in ANSI/IEEE 802.1Q, applied to the metropolitan area. As a result, service providers gain per-VLAN isolation, achieving a more stable Layer 2 topology and limiting the negative effects of broadcast storms, while paying a very low penalty in terms of management complexity.

VPN technology allows service providers to manage, through policy, precisely which IP routing information is exchanged among their client networks. This capability allows service providers to build corporate intranets, as well as multiclient extranets, while offloading from their clients the responsibility of managing the corporate backbone. The dominant emerging standard is RFC 2547bis, which employs Internal Border Gateway Protocol to exchange routing information within IP subnetworks and External Border Gateway Protocol between subnetworks.

This approach defines customer edge (CE) devices that attach to provider edge (PE) devices. The CE will be a routing peer of the adjacent PE but will not be a routing peer of CE devices at other sites, even if they are members of the same VPN. Routers at different sites do not exchange routing information directly with each other but rely on the adjacent PE router to exchange information regarding the reachability of external IP addresses according to the imposed policy.

PE routers maintain multiple forwarding tables, one mapped to each site to which the PE is connected. These forwarding tables are populated with entries from sites that have at least one VPN in common with the site associated with that particular table, preventing communication between sites that have no common VPN. This property provides the additional benefit of enabling service providers to handle VPNs with overlapping address spaces unambiguously without the use of technologies such as Network Address Translation. It should be apparent how powerful this technology can be in enabling business models such as ASPs, content service providers, storage service providers, outsourced corporate intranets, and carriers' carriers.
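The per-site forwarding-table behavior can be sketched as follows. Site names, VPN names, and the data structures are hypothetical simplifications; a real PE implements this with BGP route targets and MPLS labels per RFC 2547bis.

```python
# Sketch of RFC 2547bis-style per-site forwarding tables (VRFs) on
# a PE router. A route from one site is installed only into the
# tables of sites sharing at least one VPN, so overlapping private
# address spaces never collide. Illustrative only.

class PERouter:
    def __init__(self):
        self.site_vpns = {}   # site -> set of VPN names
        self.vrf = {}         # site -> {prefix: next_hop}

    def attach_site(self, site, vpns):
        self.site_vpns[site] = set(vpns)
        self.vrf[site] = {}

    def advertise(self, src_site, prefix, next_hop):
        # Install the route only where a VPN is shared with the source.
        for site, vpns in self.site_vpns.items():
            if site != src_site and vpns & self.site_vpns[src_site]:
                self.vrf[site][prefix] = next_hop

pe = PERouter()
pe.attach_site("acme-hq", ["acme-vpn"])
pe.attach_site("acme-branch", ["acme-vpn"])
pe.attach_site("globex-hq", ["globex-vpn"])

# Both VPNs can use the same 10.1.0.0/16 space unambiguously.
pe.advertise("acme-hq", "10.1.0.0/16", "pe-to-acme-hq")
pe.advertise("globex-hq", "10.1.0.0/16", "pe-to-globex-hq")

print(pe.vrf["acme-branch"])   # sees only the acme route
print(pe.vrf["globex-hq"])     # empty: no VPN in common with acme
```

Note how the identical prefix advertised by two different customers lands in disjoint tables, which is precisely the property that makes overlapping address spaces safe without Network Address Translation.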

It should be pointed out that the authors make a clear distinction between a VPN and a secure VPN. Whereas a secure VPN will employ some form of datagram encryption, the VPN model discussed here provides a level of security equivalent to that of a Layer 2 (ATM, frame relay, etc.) connection. Use of the above VPN approach does not prohibit use of secure VPN technology. However, in this scenario, an attached secure VPN client device is solely responsible for handling the encryption functionality. For example, if a service provider were using the VPN approach described above but needed to support some clients that required secure VPNs, the CE device would be a secure VPN router running IPsec with tunnel-mode security associations and encapsulating security payload protocol in order to tunnel encrypted datagrams through the service-provider backbone.

Different applications require different levels of predictability and consistency with respect to delay and loss objectives. As such, the ability to classify traffic on a real-time basis and provide application-specific QoS on a per-datagram basis allows service providers to more significantly differentiate their service offerings.

When discussing QoS, there are two seminal models to consider: deterministic QoS versus relative QoS. The choice of which model is appropriate is largely a function of the specific needs of an application and the particular service being purchased by the customer.

Deterministic QoS refers to the ability to provide strict worst-case bounds on end-to-end delay, variance of delay, and loss. In general, the end-to-end QoS performance bounds are limited by the QoS performance of the weakest node in the end-to-end path. For an individual node to provide deterministic QoS, it must provide stateful forwarding (either connection-oriented or pinned routes), admission control, ingress metering and marking, segmentation and reassembly of large packets traversing the switch fabric, congestion management, and per-connection output scheduling.

A node may either participate in a distributed constraint-based routing protocol or employ a centralized approach using a route server possessing topology information and knowledge of link constraints (such as bandwidth availability) for all links within the routing domain. Node implementations that support all of the above functionalities are able to provide strict determinism with respect to QoS bounds, with an ability to adapt to arbitrary connection-level loading patterns. This description denotes the basic set of QoS functionality used to specify ATM. More recently, the Internet Engineering Task Force has adopted seminal components of this QoS description in defining MPLS in such a way that the protocols are capable of providing deterministic QoS when needed. Perhaps a caveat emptor is warranted: Not all MPLS implementations are created equal.

Relative QoS provides a level of service differentiation such that one grade of service may receive better or worse QoS relative to another, without the additional rigor required to provide bounded loss, delay, and variance of delay. With the maturation of the differentiated services (DiffServ) standards, a standards-based approach to an IP-based relative QoS model is available. DiffServ redefines the type-of-service field in the IP header, using 6 bits to specify forwarding treatment and drop precedence for local management of congestion epochs. Encoded within this 6-bit DiffServ code point are three per-hop behaviors:

  • Expedited forwarding. Used to provide low-loss, low-delay, low-jitter connectivity as may be required to support a virtual leased-line service.
  • Assured forwarding. Defines four forwarding classes, each of which has up to three drop precedences.
  • Best effort. Forwarding treatment used for low-priority traffic, sometimes referred to as "send and pray."
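The bit-level composition of these code points can be shown in a few lines. The values follow the published DiffServ standards; the helper names are ours.

```python
# Sketch: composing DiffServ code points (DSCPs) for the standard
# per-hop behaviors. The 6-bit DSCP occupies the upper bits of the
# old IPv4 type-of-service byte.

EF = 0b101110          # expedited forwarding (decimal 46)
BEST_EFFORT = 0b000000

def af_dscp(forwarding_class, drop_precedence):
    """Assured forwarding: classes 1-4, drop precedences 1-3."""
    assert 1 <= forwarding_class <= 4 and 1 <= drop_precedence <= 3
    return (forwarding_class << 3) | (drop_precedence << 1)

def tos_byte(dscp):
    # DSCP sits in bits 7..2; the low 2 bits are reserved.
    return dscp << 2

print(af_dscp(1, 1))       # AF11 -> 10
print(af_dscp(4, 3))       # AF43 -> 38
print(hex(tos_byte(EF)))   # 0xb8
```

An edge device classifying ingress traffic would mark each datagram's header with one of these values, and every downstream DiffServ-capable hop applies the corresponding forwarding treatment locally.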

A common misconception is that deterministic QoS can be achieved by simply over-engineering the capacity of the network. In fact, deterministic QoS can be achieved for very simple topologies when DiffServ forwarding is used in tandem with very conservative traffic-engineering assumptions, such as zero statistical multiplexing gain. However, as the service-provider network evolves toward more complex topologies, traffic-loading patterns may become skewed, resulting in congestion points or "hot spots" in the network. At that point, deterministic QoS degrades to relative QoS in terms of actual performance with respect to loss, delay, and variance of delay.

Advances in optical transport technologies have dramatically increased the available capacity of the fiber used in long-haul and MANs. This increase in available capacity has served to exacerbate the bottleneck that traditionally occurs at aggregation points, where many low-speed access links (or rings) are served by a smaller number of higher-speed links. In the reverse direction, the objective is to flexibly groom traffic onto SONET/SDH time slots and DWDM wavelengths, such that the capacity of these light paths is efficiently utilized. The new breed of optical edge devices providing optical concentration and intelligent application-aware packet classification, switching, and routing should prove to be quite effective at addressing these issues.
Figure 1. Virtual concatenation typically provides a software-based approach to creation, deletion, and modification of SONET/SDH connections.

Traditional SONET and SDH, while providing robust physical-layer protection and subrate multiplexing capabilities, employ a rigid multiplexing hierarchy that is not well-suited to data-centric line rates. For example, a 100Base-T Ethernet interface (100 Mbits/sec) would traditionally be mapped to an STS-3 (155 Mbits/sec), effectively wasting 55 Mbits/sec of link capacity. A number of emerging systems vendors have announced products that leverage SONET/SDH virtual concatenation to "right-size" the optical light path to the bandwidth of the data-centric connection.

Using the above example, the 100Base-T interface could be mapped to two SONET STS-1s (52 Mbits/sec), resulting in an increase of approximately 33% in transport efficiency. The American National Standards Institute (ANSI) T1X1.5 committee is currently working on a standardized approach to both high-order (STS-1 level) and low-order (VT1.5-level) virtual concatenation (see Figure 1). Vendors leveraging virtual concatenation typically provide a software-based approach to creation, deletion, and modification of these right-sized SONET/SDH connections. Such technology enables new services such as user-based subscription and bandwidth on demand, where a user's SLA may vary according to time-of-day.
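The right-sizing arithmetic is simple enough to show directly. This sketch uses the exact SONET line rates (STS-1 = 51.84 Mbits/sec); the function names are ours.

```python
# Sketch: sizing a data link onto virtually concatenated STS-1s
# instead of the next rung of the rigid SONET hierarchy.
import math

STS1 = 51.84   # Mbit/s per STS-1 (an STS-3 is 3 x STS-1 = 155.52)

def sts1_count(rate_mbps):
    """Smallest number of STS-1s that covers the client rate."""
    return math.ceil(rate_mbps / STS1)

rate = 100.0                       # 100Base-T Ethernet
rigid = 3 * STS1                   # traditional STS-3 mapping
vcat = sts1_count(rate) * STS1     # 2 x STS-1, virtually concatenated

print(sts1_count(rate))                 # 2
print(round(100 * (1 - vcat / rigid)))  # 33 (% less capacity consumed)
```

Going from three time slots to two is where the roughly 33% efficiency gain in the example comes from; the same calculation applies at VT1.5 granularity for lower-rate services.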

Low-order virtual concatenation can also be managed in hardware rather than software: a resilient hardware-based manager executes a distributed dynamic-bandwidth-allocation protocol in the overhead of standard SONET/SDH frames. This method provides a highly scalable, highly adaptive, highly granular approach to managing bandwidth in a near mathematically optimal fashion on SONET/SDH access rings (see Figure 2).
Figure 2. Dynamic bandwidth allocation provides a highly scalable, highly adaptive, highly granular approach to managing bandwidth in a near-mathematically optimal fashion on SONET/SDH access rings.

The adaptive nature of this approach circumvents the need for complex traffic engineering on access rings, as this process is managed at the physical layer by a hardware-resident distributed algorithm. Further, the speed at which the distributed dynamic-bandwidth algorithm can adapt and the tight coupling of Layer 3 signaling (used by applications to reserve service-provider capacity) to this physical-layer mechanism provide a solution that can adapt to truly arbitrary and rapidly modulating packet-level loading patterns.

When considering the potential effects on application performance of long-range-dependent or "self-similar" traffic, where correlations exist across multiple time scales (as described by Paxson and Floyd, "Wide Area Traffic: The Failure of Poisson Modeling," IEEE/ACM Transactions on Networking, 3(3), pp. 226-244, June 1995), this adaptive solution becomes even more compelling. As such, service-provider SONET/SDH access rings leveraging this mechanism can be considered futureproofed against new and unforeseen application traffic that may adversely impact network loading, as was seen with the ubiquitous and near-instantaneous adoption of HTTP several years ago. Finally, this real-time approach to virtual concatenation not only achieves new levels of efficiency in access-ring utilization, but also enables new services, such as usage-based billing in which users pay only according to the number of in-profile (guaranteed) or out-of-profile (best-effort) SONET/SDH time slots actually used per time quantum. Examples of how this dynamic mechanism might enable new services can be found in the applications of voice over IP (VoIP) and video on demand (VoD).
Figure 3. A simplified voice-over-Internet Protocol (VoIP) architecture illustrates the on-net-to-on-net case and the simple on-net-to-off-net case of call setup/teardown.

Figure 3 depicts a simplified VoIP architecture, illustrating the on-net-to-on-net case and the simple on-net-to-off-net case of call setup/teardown. A two-phase commit protocol (reserve and commit phases) is employed in commercial-grade voice architectures to ensure resources are available before signaling the called party and to ensure usage recording and billing are not initiated until the called party picks up. After the first phase of call signaling, both clients have completed capabilities negotiation and determined which network resources are required to establish the call.

Address translation (from E.164 address to IP address) is performed by a call-management server or media gateway controller (in the off-net case) using standards-based signaling such as media gateway control protocol. Clients, which may be IP phones, multimedia terminal adapters, edge routers, or cable-modem termination shelves, exchange resource reservation protocol (RSVP) PATH and RESV messages so that the endpoints and intermediate RSVP-capable nodes can reserve resources and maintain the appropriate call state. (Note: PATH and RESV are not acronyms but the names of RSVP message types.)

By coupling this Layer 3 RSVP signaling with the underlying physical-layer dynamic-bandwidth-allocation protocol, the optical edge device is able to ensure that the amount of ring capacity available for latency-sensitive traffic is just enough to handle the ambient load. Configurable parameters allow the network operator to adjust the sensitivity of this dynamic coupling mechanism.

Alternatively, if the network architect prefers to employ a relative QoS model based on DiffServ, the configurable SLA associated with each optical tunnel can be configured based on traffic-engineering assumptions. Typically, this involves Erlang calculations to estimate the voice load, coupled with some general assumptions regarding the source traffic model of the data. Using the above Layer 3 mechanism, ring-capacity utilization can be shown to be near mathematically optimal; using the DiffServ/traffic-engineering approach, capacity utilization is a function of the accuracy of the underlying traffic-engineering assumptions.
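The Erlang calculation mentioned above can be sketched with the classic Erlang B recurrence. This illustrates the dimensioning step only; the offered load and blocking target are hypothetical examples.

```python
# Sketch: Erlang B dimensioning for voice load. Given an offered
# load in erlangs, how many trunks (and hence how much provisioned
# ring capacity) keep call blocking below a target probability?

def erlang_b(offered_erlangs, trunks):
    """Blocking probability via the standard Erlang B recurrence."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (offered_erlangs * b) / (n + offered_erlangs * b)
    return b

def trunks_needed(offered_erlangs, target_blocking=0.01):
    n = 1
    while erlang_b(offered_erlangs, n) > target_blocking:
        n += 1
    return n

# e.g., 20 erlangs of offered voice load at 1% blocking:
print(trunks_needed(20.0))
```

The resulting trunk count, multiplied by the per-call bandwidth of the chosen codec, gives the static capacity that would be reserved for voice under the DiffServ/traffic-engineering approach, which is exactly the figure the dynamic Layer 3 mechanism avoids having to pre-compute.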

The hardware-based dynamic-bandwidth-allocation mechanism is also useful in enabling VoD architectures. In this case, MPEG transport packets are removed from a digital video broadcast (DVB)/MPEG asynchronous serial interface (ASI) link and mapped to virtually concatenated SONET/SDH time slots. This allows the amount of service-provider network capacity used to carry a 270-Mbit/sec DVB/MPEG ASI stream (CENELEC '97) to be reduced to a size that correlates to the actual number of active programs within the ASI stream, effectively removing the transport packets of inactive programs (where there is no useful data) prior to mapping to SONET/SDH time slots.
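The packet-removal step can be sketched using the standard MPEG-2 transport-stream format, in which stuffing packets carry the reserved null PID 0x1FFF. The sketch is illustrative; real transport equipment performs this filtering in hardware at line rate.

```python
# Sketch: dropping MPEG null transport packets before mapping a
# DVB/ASI stream to SONET/SDH time slots, so consumed ring capacity
# tracks the active programs rather than the fixed 270 Mbit/s line
# rate. Illustrative only.

TS_PACKET = 188          # bytes per MPEG-2 transport packet
NULL_PID = 0x1FFF        # stuffing packets carry no useful data

def pid_of(packet):
    # The 13-bit PID spans the low 5 bits of byte 1 plus byte 2.
    return ((packet[1] & 0x1F) << 8) | packet[2]

def strip_null_packets(stream):
    out = bytearray()
    for i in range(0, len(stream) - TS_PACKET + 1, TS_PACKET):
        pkt = stream[i:i + TS_PACKET]
        if pkt[0] == 0x47 and pid_of(pkt) != NULL_PID:  # 0x47 = sync
            out += pkt
    return bytes(out)

# One video packet (PID 0x100) followed by one null packet:
video = bytes([0x47, 0x01, 0x00]) + bytes(185)
null = bytes([0x47, 0x1F, 0xFF]) + bytes(185)
print(len(strip_null_packets(video + null)))   # 188
```

In a stream where only a fraction of the 270-Mbit/sec envelope carries active programs, this filtering is what lets the virtually concatenated SONET/SDH payload shrink to match the useful load.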

Additional reduction of service-provider infrastructure cost is achieved when video statistical multiplexing capabilities are integrated into the transport equipment.

As the telecommunications market becomes increasingly competitive, service providers find themselves in a situation where they must keenly differentiate the services they provide in order to grow their revenue. Additionally, service providers are compelled to deploy converged networks with increased service agility in an attempt to decrease infrastructure and maintenance costs.

James Scott is a staff scientist, Gang Sun is a product manager, and Lester Yung is a senior sales engineer at Geyser Networks (Sunnyvale, CA).