Multi-service platforms with standards-based NPU and switch fabric devices

Dec. 12, 2003

A multi-service platform designed with network processor unit (NPU)-based line cards and a switch fabric transfers media streams between the various network interfaces.

By Richard Borgioli and Raffaele Noro, Vitesse Semiconductor--The convergence of voice, data, and video onto packet-switched networks is underway as new deployments and traditional communication networks become multi-service platform architectures. Advances in network processor unit (NPU) and switch fabric devices have now made standards-based components available with the processing speed, quality of service (QoS), and programmability necessary to offer a viable alternative to an ASIC implementation for platforms of this type. Important areas of application for standards-based NPUs and switch fabrics include media gateways designed to transfer voice over IP and other packetized media flows between PSTN, mobile, core, and IP networks (see Figure 1). Because the access points of a converged network use different protocols for transporting data and voice (e.g., ATM, IP, point-to-point protocol, and SONET), the task of the media gateway is to seamlessly transfer the media streams between the various network interfaces while supporting QoS guarantees.

Design of multi-service platforms
Multi-service platforms require a highly scalable architecture that includes integrated packet switches varying in size from a few ports in access networks to hundreds of ports in enterprise and metropolitan area networks. A multi-service platform normally contains a number of NPU-based line cards (typically one per interface port), a switch fabric, and a management and control unit for control-plane operations. Physical-layer devices connect to the NPU at both the source and destination ports. On ingress, the NPU segments incoming packet streams of various types (e.g., Gigabit Ethernet, OC-48, packet over SONET) into fabric-compatible frames, or cells. The switch fabric transfers these cells to the appropriate egress NPU, which in turn reassembles them into outgoing packet streams.
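
To make the ingress path concrete, the sketch below shows how a packet might be cut into fixed-size fabric cells, each tagged with the destination port and class the fabric needs to deliver it. The cell layout, field widths, and 64-byte payload size are illustrative assumptions, not the format of any particular fabric.

```c
/* Minimal sketch of ingress segmentation: an incoming packet is cut
 * into fixed-size fabric cells, each tagged with the destination port
 * and traffic class carried in the cell header.  The cell layout and
 * field sizes here are illustrative, not taken from any one fabric. */
#include <stdint.h>
#include <string.h>

#define CELL_PAYLOAD 64            /* assumed fabric cell payload size */

struct fabric_cell {
    uint8_t  dest_port;            /* egress NPU/port for this cell    */
    uint8_t  traffic_class;        /* class assigned at classification */
    uint8_t  last;                 /* 1 on the final cell of a packet  */
    uint8_t  len;                  /* valid payload bytes in this cell */
    uint8_t  payload[CELL_PAYLOAD];
};

/* Segment 'pkt' into 'cells'; returns the number of cells produced.
 * The egress NPU reassembles by concatenating payloads until it sees
 * a cell with 'last' set. */
static int segment_packet(const uint8_t *pkt, int pkt_len,
                          uint8_t dest, uint8_t tclass,
                          struct fabric_cell *cells, int max_cells)
{
    int n = 0;
    for (int off = 0; off < pkt_len && n < max_cells; n++) {
        int chunk = pkt_len - off;
        if (chunk > CELL_PAYLOAD)
            chunk = CELL_PAYLOAD;
        cells[n].dest_port     = dest;
        cells[n].traffic_class = tclass;
        cells[n].len           = (uint8_t)chunk;
        cells[n].last          = (off + chunk == pkt_len);
        memcpy(cells[n].payload, pkt + off, chunk);
        off += chunk;
    }
    return n;
}
```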

The multi-service platform must perform traffic management along with packet switching and system control. Traffic management is largely carried out by the NPUs, while the switch fabric provides the switching capability to move traffic at wire speed from all sources to destination NPUs. The traffic management functions include classification, marking, metering, policing, and scheduling. Packets received at the ingress NPU are classified based on the packet label or on some other packet attribute relating to source, destination, or protocol. Packets may be conforming or non-conforming to the existing traffic contract for the particular flow, with non-conforming packets marked or discarded by the ingress NPU.
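
The metering and policing step is conventionally implemented as a token bucket: the contract's average bit rate sets the refill rate, and its maximum burst size sets the bucket depth. The sketch below is a minimal version of that mechanism, with illustrative names and byte-based accounting; it does not reproduce any particular NPU's microcode.

```c
/* Token-bucket meter/policer sketch.  A flow's traffic contract is an
 * average rate (tokens added per second) and a maximum burst size (the
 * bucket depth).  A packet that finds enough tokens is conforming; one
 * that does not is marked (or dropped) by the ingress NPU. */
#include <stdint.h>

struct token_bucket {
    uint64_t tokens;        /* current fill, in bytes        */
    uint64_t depth;         /* max burst size, in bytes      */
    uint64_t rate;          /* committed rate, bytes/second  */
    uint64_t last_ns;       /* timestamp of last update      */
};

enum verdict { CONFORMING, NON_CONFORMING };

static enum verdict tb_police(struct token_bucket *tb,
                              uint64_t now_ns, uint32_t pkt_bytes)
{
    /* Refill tokens for the elapsed interval, capped at the depth. */
    uint64_t add = (now_ns - tb->last_ns) * tb->rate / 1000000000ull;
    tb->tokens = tb->tokens + add > tb->depth ? tb->depth
                                              : tb->tokens + add;
    tb->last_ns = now_ns;

    if (tb->tokens >= pkt_bytes) {          /* in contract: pass    */
        tb->tokens -= pkt_bytes;
        return CONFORMING;
    }
    return NON_CONFORMING;                  /* mark or discard      */
}
```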

The switch fabric must maintain the integrity of data and the classification performed by the NPU, as well as implement class-based handling of the different traffic flows. This is generally accomplished by allocating a minimum bandwidth to each class or by serving classes with a pre-determined priority. Cells received at the egress NPU from the switch fabric are reassembled into packets and stored in memory. A scheduler determines the transmission order of packets to the output port, and in certain applications re-shapes the traffic by rescheduling some packets ahead of others.
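
Class-based handling can be as simple as strict priority across per-class queues, the second of the two disciplines mentioned above. The sketch below implements that policy over a minimal ring-buffer queue; the queue depth and four-class split are illustrative assumptions.

```c
/* Strict-priority service of per-class cell queues.  The ring-buffer
 * queue, class count, and depth are illustrative. */
#include <stddef.h>

#define NUM_CLASSES 4        /* class 0 = highest priority */
#define QDEPTH      256

struct cell_queue {
    void  *slot[QDEPTH];
    size_t head, tail;       /* tail == head means empty */
};

static int enqueue(struct cell_queue *q, void *cell)
{
    size_t next = (q->tail + 1) % QDEPTH;
    if (next == q->head)
        return -1;           /* full: caller marks/drops the cell */
    q->slot[q->tail] = cell;
    q->tail = next;
    return 0;
}

/* Next cell to cross the fabric: always from the highest-priority
 * class with traffic waiting, so delay-sensitive classes see the
 * fabric as lightly loaded regardless of best-effort backlog. */
static void *schedule_next(struct cell_queue cls[NUM_CLASSES])
{
    for (int c = 0; c < NUM_CLASSES; c++)
        if (cls[c].head != cls[c].tail) {
            void *cell = cls[c].slot[cls[c].head];
            cls[c].head = (cls[c].head + 1) % QDEPTH;
            return cell;
        }
    return NULL;             /* all queues empty */
}
```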

Standards-based implementations
The interface between the NPU and the switch fabric provides for the transfer of cells based on destination and service classification. Each cell consists of a header and a payload, with the header carrying system information (i.e., destination, class, and congestion control) while the payload contains the data to be transferred through the fabric. Standard interfaces are defined by the transmission protocol contained in the cell header and by such physical characteristics as word size and clock frequency. These interface standards include the Common Switch Interface-Layer 1 (CSIX-L1), the System Packet Interface (SPI), the Network Processing Forum Streaming Interface (NPSI), and the emerging Advanced Switching standard based on PCI Express.

The CSIX-L1 standard specifies a format and a protocol for the exchange of information between NPUs and switch fabrics at the physical-layer level. The cells exchanged across the CSIX interface are called CFrames. A CFrame consists of an 8-byte base header, an optional extension header, the payload, and a vertical parity word. The headers carry system information (type, class, destination, payload length, and flow-control bits), while the payload contains the data to be transferred. The maximum payload length is typically determined by the cell size configured into the fabric (e.g., 64, 96, or 128 bytes) and may not exceed the absolute maximum of 256 bytes specified by the standard.
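
The struct below paraphrases the CFrame fields just named so the header/payload split is easy to see. It is emphatically not the normative CSIX-L1 bit layout; field widths, ordering, and the parity computation are assumptions made for readability, and the real encoding is defined by the specification.

```c
/* Illustrative paraphrase of a CFrame: base-header fields, payload,
 * and a vertical parity word.  NOT the normative CSIX-L1 bit layout;
 * widths and ordering here are readability assumptions. */
#include <stdint.h>

#define CFRAME_MAX_PAYLOAD 256   /* absolute maximum per the standard */

struct cframe {
    /* base header (system information) */
    uint8_t  type;               /* unicast, multicast, flow control.. */
    uint8_t  traffic_class;      /* class used for class-based queuing */
    uint16_t dest;               /* destination fabric port            */
    uint16_t payload_len;        /* valid bytes in payload[]           */
    uint16_t flow_ctrl;          /* link-level flow-control bits       */

    /* payload: the configured fabric cell size (e.g., 64, 96, or
     * 128 bytes) bounds payload_len at run time.                     */
    uint8_t  payload[CFRAME_MAX_PAYLOAD];

    uint16_t vparity;            /* vertical parity over the frame     */
};

/* Vertical parity sketch: XOR of the payload taken 16 bits at a time. */
static uint16_t cframe_vparity(const struct cframe *f)
{
    uint16_t p = 0;
    for (uint16_t i = 0; i + 1 < f->payload_len; i += 2)
        p ^= (uint16_t)(f->payload[i] << 8 | f->payload[i + 1]);
    return p;
}
```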

Interface standards are important in achieving the QoS levels specified in a service-level agreement (SLA) between the traffic source and the network provider for the various traffic flows to be managed and switched. In general, traffic flows can be mapped into one of three basic service categories (a classification sketch follows the list):

Guaranteed delay. This service category is characterized by a traffic contract in which the traffic source commits to an average bit rate and a maximum burst size, while the network guarantees a maximum latency and minimum bandwidth. This category is generally reserved for voice traffic and other delay-sensitive traffic.

Guaranteed bandwidth. This service category is characterized by a traffic contract in which the traffic source commits to an average bit rate, while the network guarantees a minimum throughput. This category is generally used for loss-sensitive traffic, which would include virtual private networks and file transfers.

Best effort. This service category is characterized by no traffic contract and therefore no commitment on latency or throughput. This category is generally left for the least delay-sensitive traffic, such as Web access and e-mail.
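
As a rough illustration of how a classifier might map marked packets into these categories, the sketch below keys off the DiffServ code point: expedited forwarding (EF) traffic lands in guaranteed delay, the assured forwarding (AF) classes in guaranteed bandwidth, and everything else in best effort. The category names and the fallback policy are illustrative assumptions.

```c
/* Sketch of mapping a packet's DiffServ code point (DSCP) to one of
 * the three service categories above.  EF and AF values follow the
 * standard DiffServ conventions; the enum and fallback policy are
 * illustrative. */
#include <stdint.h>

enum service_category {
    GUARANTEED_DELAY,        /* e.g., voice                */
    GUARANTEED_BANDWIDTH,    /* e.g., VPNs, file transfers */
    BEST_EFFORT              /* e.g., Web access, e-mail   */
};

static enum service_category classify_dscp(uint8_t dscp)
{
    if (dscp == 46)                  /* EF: expedited forwarding     */
        return GUARANTEED_DELAY;
    if ((dscp >> 3) >= 1 && (dscp >> 3) <= 4)
        return GUARANTEED_BANDWIDTH; /* AF1x-AF4x assured classes    */
    return BEST_EFFORT;              /* default/unmarked traffic     */
}
```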

Experimental results
As an example illustrating the QoS capability of a standards-based NPU/switch fabric, software tools were used to simulate the operation of the network processor interfaced to a switch fabric made up of a queue manager and a crossbar switch. A DiffServ node was modeled in which the latency of the "integrated" NPU/switch fabric was evaluated for the expedited forwarding (EF) per-hop behavior (PHB), the guaranteed-delay class that would be used in establishing an SLA for voice traffic.

The simulated DiffServ node consists of 16 OC-48 ports, with each port connected to a line card containing an ingress and egress network processor operating at 600 MHz interfaced to a fabric queue manager. The interface is a 32-bit CSIX-L1 bus clocked at 125 MHz, providing 4 Gbits/sec of CSIX bandwidth per port. The line card for each port is connected to a switch card containing four crossbar switches, with each switch connection made through high-speed serial links operating at 155.52 MHz. This establishes the switch fabric connection speed at 2.5 Gbits/sec per serial link, for an aggregate switch bandwidth of 10 Gbits/sec per port.
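
The quoted link budgets can be sanity-checked with simple arithmetic. The CSIX figure follows directly from bus width times clock; for the serial links, the sketch assumes the line rate is the usual 16x multiple of the 155.52-MHz reference clock, which is an inference on our part rather than something stated in the configuration.

```c
/* Back-of-the-envelope check of the link budgets quoted above.  The
 * 16x serdes multiple of the 155.52-MHz reference clock is an assumed
 * inference, not stated in the configuration. */
#include <stdio.h>

int main(void)
{
    double csix   = 32 * 125e6;      /* 32-bit CSIX bus at 125 MHz   */
    double serdes = 16 * 155.52e6;   /* assumed 16x reference clock  */
    double fabric = 4 * serdes;      /* four crossbar links per port */

    printf("CSIX-L1 per port : %.1f Gbit/s\n", csix   / 1e9); /* 4.0  */
    printf("per serial link  : %.2f Gbit/s\n", serdes / 1e9); /* 2.49 */
    printf("fabric per port  : %.2f Gbit/s\n", fabric / 1e9); /* 9.95 */
    return 0;
}
```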

The objective of the simulation was to determine the maximum latency that could be guaranteed for an expedited forwarding PHB SLA. A 1-Gbit/sec flow of 64-byte IP packets was used as the traffic under test on one of the ports, with a variable amount of competing background traffic from the 15 remaining OC-48 ports. End-to-end latency measurements were collected and plotted at increasing levels of background traffic (see Figure 2). The plotted simulation data shows that the observed maximum latency increased from 65% to 95% of the guaranteed maximum delay bound as the background traffic increased from 0 to 100% of the line rate. Furthermore, the guaranteed maximum delay bound for this configuration was within a factor of 2.5 of the minimum latency estimated for the NPU/switch fabric combination, indicating excellent achieved latency on time-critical flows.

Off-the-shelf NPU and switch fabric devices based on common standard interfaces provide OEMs with a viable and cost-effective alternative to custom ASICs in designing multi-service platforms. The multi-service packet switch described here uses NPU and switch fabric devices interfaced via the CSIX-L1 standard, which allowed the NPU/switch fabric assembly to be treated as an integral unit by standards-based simulation software. This enabled the QoS capability of the NPU/switch fabric to be characterized for a representative multi-service application prior to hardware implementation. Supported by comprehensive software tools, such standards-based devices are now an attractive off-the-shelf option for multi-service designs.

Richard Borgioli is a senior applications engineer and Raffaele Noro is a design architect at Vitesse Semiconductor (Camarillo, CA).

Fig. 2 The guaranteed maximum delay bound for this configuration was within a factor of 2.5 of the minimum latency estimated for the NPU/switch fabric combination.
