Building a configurable silicon foundation for carrier-grade Ethernet

Jan. 1, 2011

By Tao Gu

Carrier Ethernet incorporates modifications to traditional Ethernet in support of quality of service (QoS), high availability, and service operations, administration, and maintenance (OAM) requirements. Delivering these capabilities requires a configurable system platform whose functionality is easy to modify over time and whose underlying silicon can take on increasing packet-processing work: from parsing, lookup, MAC learning, Layer 2 bridging, and Layer 3 routing to Multiprotocol Label Switching (MPLS) label swapping, tunnel encapsulation and decapsulation, and QoS processing functions such as policing, shaping, and scheduling.

As these requirements push network processing units (NPUs) to their limits, the industry is turning to new silicon options that will enable a variety of new carrier-grade platforms (including line cards, pizza boxes, and distributed, chassis-based packet-switching architectures) that maximize performance and flexibility while minimizing costs.

Figure 1. Flexible modes of operation for external memory use.

Understanding the Carrier Ethernet challenge

Because jitter and delay are not critical in enterprise applications, traditional Ethernet was optimized for best-effort delivery with minimal QoS and service-level agreement (SLA) requirements. In contrast, carriers must deliver 99.999% availability assurances for real-time applications like voice over IP or video, with new levels of network reliability and resilience, higher scalability to support thousands of network elements and millions of subscribers, and easy-to-use service provisioning and OAM.

To address these issues, the Metro Ethernet Forum (MEF) defined five general Carrier Ethernet requirements to enable the most cost-effective, ubiquitous, scalable, reliable, and manageable service delivery:

  • Standardized Services: The MEF has defined E-Line for point-to-point, E-LAN for multipoint-to-multipoint, and E-Tree for point-to-multipoint services. The goal is to make these services ubiquitously available and interoperable.
  • Scalability: Carrier Ethernet must easily and economically scale to a nationwide and even global footprint while supporting millions of users and various concurrent applications. Network-to-network interfaces (NNIs) require bandwidth of 10G, 40G, 100G, and beyond.
  • Reliability: Carrier Ethernet must provide adequate resiliency (<50 ms failover time) and 99.999% equipment and network reliability, similar to traditional SONET/SDH networks. Service restoration methods should include path protection, node or link protection, and fast rerouting mechanisms.
  • QoS: Carrier Ethernet must support current and future revenue-generating services with stringent SLAs. These SLAs span requirements such as excess information rate (EIR), committed information rate (CIR), frame loss, delay, delay variation characteristics, and hierarchical QoS.
  • Service management: Carrier Ethernet must support service-oriented management systems using standards-based, vendor-independent implementations, while delivering effective and efficient OAM and provisioning (OAM&P) capabilities. Network failures and faults must be detected and diagnosed quickly and automatically.
There are also newer Carrier Ethernet technologies to consider, such as Provider Backbone Bridge/Provider Backbone Transport (PBB/PBT), Ethernet OAM, Ethernet automatic protection switching (APS), and MPLS Transport Profile (MPLS-TP) for access, metro, and core networks.
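The CIR/EIR bandwidth-profile parameters listed under the QoS requirement above can be sketched as a two-token-bucket, three-color policer in the spirit of RFC 4115. The class name, units, and timing source here are illustrative, not a hardware design:

```python
import time

class TwoRatePolicer:
    """Sketch of a CIR/EIR bandwidth profile: a frame is marked green if the
    committed bucket covers it, yellow if the excess bucket does, red
    otherwise. Illustrative only; real silicon does this per service at
    wire speed."""

    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes   # committed rate/burst
        self.eir, self.ebs = eir_bps / 8.0, ebs_bytes   # excess rate/burst
        self.c_tokens, self.e_tokens = float(cbs_bytes), float(ebs_bytes)
        self.last = time.monotonic()

    def color(self, frame_len):
        now = time.monotonic()
        dt, self.last = now - self.last, now
        # Refill both buckets, capped at their configured burst sizes
        self.c_tokens = min(self.cbs, self.c_tokens + self.cir * dt)
        self.e_tokens = min(self.ebs, self.e_tokens + self.eir * dt)
        if frame_len <= self.c_tokens:
            self.c_tokens -= frame_len
            return "green"
        if frame_len <= self.e_tokens:
            self.e_tokens -= frame_len
            return "yellow"
        return "red"
```

Green traffic is delivered per the SLA, yellow is delivered best-effort, and red is dropped, which is how CIR and EIR translate into per-frame forwarding decisions.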

The MEF has finalized MEF 9 and MEF 14 to standardize network services and traffic management. At the same time, carriers must ensure scalability and reliability while retaining the flexibility to create new revenue-generating services.

Other factors complicate the picture further. Converged networks that must simultaneously support traditional TDM services and new packet-based services increase operational complexity. Meanwhile, service demands can vary widely across millions of subscribers who are always looking for new services at better value. Carriers must provide existing services while reserving headroom for creating and delivering new value-added services.

Configurable Carrier Ethernet building blocks

Most current Carrier Ethernet equipment is constrained by silicon products that are based on enterprise-class Ethernet. Costly FPGAs or NPUs are often used to deliver carrier-grade features, making it difficult to achieve the necessary scalability and availability in large network deployments.

Also, adding new features to an NPU-based system can seriously degrade performance and make it challenging, if not impossible, to maintain wire-speed data rates. The only way to optimize performance as new features are added is to redesign the NPU's microcode architecture and/or add NPUs, which increases R&D cost and results in non-deterministic performance due to the complex software and hardware design.

While application-specific standard product (ASSP) devices present a cost-effective, high-performance alternative with the opportunity to recycle previous designs, their downside is a lack of flexibility because ASSP workflow and feature sets are fixed.

The answer is a new class of high-performance silicon capable of performing all packet-processing functions and even some management functions that were traditionally handled by software. This new class of devices must have the following capabilities:

  • NPU-like packet processing flexibility: Depending on which transport technology is used (including emerging Carrier Ethernet technologies such as PBB/PBT and MPLS-TP), packets on the wire might be encapsulated in a specific format that defines packet headers or labels. A Carrier Ethernet packet processor should have the flexibility to process packets with any combination of encapsulation headers or labels and any depth of layer stacking, processing a single packet multiple times if needed.
  • Distributed architecture: A carrier-grade system must deliver high performance, scalability, and availability with fast fault recovery (normally less than 50 ms). This requires a truly distributed architecture with better performance, scalability, and reliability than centralized designs. Each processor in the system maintains only local information, which dramatically reduces memory usage and increases total system switching/routing capacity. Multiple links between each processor and the crossbar ensure a fully connected system with fast fault recovery if any system links or nodes (processors) fail.
  • Flexible modes: Carrier Ethernet requires configurable packet processors that work in several operation modes, so that a single device can meet various form-factor requirements, from low wire-speed access to high wire-speed core switching/routing. Packet processors must support a broad range of Gigabit Ethernet, 2.5-Gigabit Ethernet, 10-Gigabit Ethernet, and mixed Gigabit Ethernet/10-Gigabit Ethernet port densities for platforms ranging from line cards to pizza boxes.
  • Flexible memory allocation: Flexible memory allocation reduces system cost by minimizing on-chip and external memory usage. For instance, a 256K-entry TCAM should be configurable to hold 256K MAC entries; 192K route entries and 64K MAC entries; or 256K route entries. Similar flexibility is required for external memory usage (see Figure 1).
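The header flexibility described in the first bullet amounts to walking an encapsulation stack by EtherType until the payload is reached. The following is a minimal sketch of that idea; the offsets follow the standard header sizes, but the function and table names are hypothetical:

```python
# EtherType values for common Carrier Ethernet encapsulations
ETHERTYPES = {
    0x8100: "C-VLAN (802.1Q)",
    0x88A8: "S-VLAN (802.1ad)",
    0x88E7: "I-TAG (802.1ah PBB)",
    0x8847: "MPLS unicast",
    0x0800: "IPv4",
}

def parse_stack(frame: bytes):
    """Walk the encapsulation stack of an Ethernet frame, recording each
    layer until a non-tag EtherType (the payload) is reached. Illustrative
    sketch of any-depth layer-stack parsing, not a hardware parser."""
    layers, offset = [], 12                  # skip destination/source MACs
    while True:
        etype = int.from_bytes(frame[offset:offset + 2], "big")
        offset += 2
        if etype in (0x8100, 0x88A8):        # VLAN tag: 2-byte TCI follows
            layers.append(ETHERTYPES[etype])
            offset += 2
        elif etype == 0x88E7:                # PBB I-TAG: 4-byte tag, then
            layers.append(ETHERTYPES[etype]) # the inner customer MACs
            offset += 4 + 12
        elif etype == 0x8847:                # MPLS: pop labels until the
            layers.append(ETHERTYPES[etype]) # bottom-of-stack bit is set
            while True:
                entry = int.from_bytes(frame[offset:offset + 4], "big")
                offset += 4
                if entry & 0x100:            # S (bottom-of-stack) bit
                    break
            break
        else:                                # payload reached
            layers.append(ETHERTYPES.get(etype, hex(etype)))
            break
    return layers
```

A fixed-pipeline parser supports only the combinations it was designed for; the point of configurable silicon is that the equivalent of this loop can be reprogrammed as new encapsulations emerge.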
Figure 2. PBB VPLS processing pipeline.

One of the key challenges in delivering all these attributes is orchestrating complex packet-processing tasks. This can be accomplished with a loopback mechanism that flexibly concatenates multiple packet-processing modules. The example mechanism shown in Figure 2 uses built-in packet lookup and processing modules that specialize in service classification, Ethernet bridging, IP routing, MPLS label switching, policing, and queuing/shaping. It can process packets in PBB VPLS format, across all necessary steps at the ingress IB-PE and egress IB-PE, with a loopback operation after each step.
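The loopback concatenation idea can be sketched as follows, with toy stages standing in for the real classification, bridging, and encapsulation modules. All stage names and behaviors here are illustrative, not the actual chip pipeline:

```python
def make_pipeline(stages):
    """Concatenate specialized processing modules via loopback: each stage
    returns (packet, next_stage), and the packet re-enters the pipeline at
    next_stage until a stage returns None, meaning the packet is done."""
    def process(packet):
        stage, trace = 0, []
        while stage is not None:
            name, fn = stages[stage]
            trace.append(name)                 # record the loopback path
            packet, stage = fn(packet)
        return packet, trace
    return process

# Toy stages mimicking a PBB VPLS ingress path: classify, bridge, encapsulate
stages = [
    ("service_classify", lambda p: ({**p, "service_id": 42}, 1)),
    ("ethernet_bridge",  lambda p: ({**p, "egress_port": 7}, 2)),
    ("pbb_encap",        lambda p: ({**p, "b_tag": 100, "i_sid": 5000}, None)),
]
pbb_ingress = make_pipeline(stages)
```

Because each stage chooses the next one, the same fixed set of modules can be concatenated in different orders for different services, which is what lets one device cover bridging, routing, and MPLS paths.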

To deliver high availability, service awareness must be built into the silicon architecture. Purpose-built data structures can be incorporated into the silicon to support Carrier Ethernet service provisioning and management. Using the service ID concept, a flexible service classifier can assign a service ID to all the packets associated with a service. The service ID is then used throughout the packet processor pipeline for access control list (ACL), policing, service queue allocation, scheduling/shaping, service OAM, performance monitoring, and statistics accounting. This approach reduces system and software complexity and, in turn, cuts OEMs' system cost and carriers' capital expenditures.
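The service ID concept can be sketched as a classifier whose per-service state is indexed by one ID everywhere downstream. The rule keys (port, S-VLAN) and table layout below are assumptions for illustration:

```python
class ServiceClassifier:
    """Sketch of the service-ID concept: flows are mapped to a service ID,
    and per-service state (queue assignment, statistics, and so on) is then
    indexed by that single ID throughout the pipeline. Illustrative only."""

    def __init__(self):
        self.rules = {}     # (port, vlan) -> service_id
        self.queues = {}    # service_id -> service queue number
        self.stats = {}     # service_id -> byte counter

    def provision(self, port, vlan, service_id, queue):
        self.rules[(port, vlan)] = service_id
        self.queues[service_id] = queue
        self.stats[service_id] = 0

    def classify(self, port, vlan, frame_len):
        sid = self.rules.get((port, vlan))
        if sid is None:
            return None                     # unknown flow: default path
        self.stats[sid] += frame_len        # accounting keyed by service ID
        return sid, self.queues[sid]
```

Because ACLs, policers, OAM state, and counters all key off the same ID, provisioning a new service touches one table entry rather than several independent subsystems.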

Availability would be further enhanced by using a "single bit-flip" fast switching mechanism with built-in circuits that perform the protection switching action and guarantee <50-ms switching times. Other features that optimize availability include applying the same APS data structures to all technologies/protocols and to the service, tunnel, and section levels, and using a unified mechanism to support both linear APS (G.8031) and ring APS (G.8032), making the design completely topology independent.
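The "single bit-flip" idea reduces a switchover to toggling one selector between pre-provisioned paths. A minimal sketch, with hypothetical field names:

```python
class ApsSelector:
    """Sketch of single bit-flip protection switching: working and protect
    paths are both provisioned in advance, and one selector bit chooses the
    active path. Flipping that bit is the entire switchover, which is why
    hardware can complete it well inside 50 ms. Illustrative only."""

    def __init__(self, working_port, protect_port):
        self.paths = (working_port, protect_port)
        self.selector = 0                  # 0 = working, 1 = protect

    def active_port(self):
        return self.paths[self.selector]

    def signal_fail(self):
        self.selector ^= 1                 # a detected fault flips the bit
        return self.active_port()
```

Because the same selector structure applies regardless of whether the protected entity is a service, a tunnel, or a section, the mechanism stays topology independent.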

Scalability is another concern. The right silicon requires adequate bandwidth (up to 100G) and, for a modular line card design, must include an uplink interface connected to a switch fabric so bandwidth can scale to terabit rates and higher. On the horizon are 400G packet processors that will deliver even higher bandwidth.

Finally, advanced QoS can be ensured by using service queues with flexible mapping for any service provisioning requirement, and hierarchical scheduling/shaping at the port, group, or queue level to deliver any desired QoS granularity.
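Hierarchical scheduling can be sketched as a two-level decision: choose among groups on a port, then among service queues inside the chosen group. The strict-priority/round-robin split below is one possible policy; real devices add weights and shapers at each level:

```python
from collections import deque

class HierarchicalScheduler:
    """Two-level sketch of hierarchical scheduling: strict priority across
    groups on a port, round-robin across the service queues inside a group.
    Group ordering, policy, and names are illustrative assumptions."""

    def __init__(self, groups):
        # groups: {group_name: [queue_names]}, highest priority group first
        self.groups = {g: {q: deque() for q in qs} for g, qs in groups.items()}
        self.rr_pos = {g: 0 for g in groups}       # round-robin pointer

    def enqueue(self, group, queue, pkt):
        self.groups[group][queue].append(pkt)

    def dequeue(self):
        for g, queues in self.groups.items():      # strict priority order
            names = list(queues)
            for i in range(len(names)):            # round-robin in group
                q = names[(self.rr_pos[g] + i) % len(names)]
                if queues[q]:
                    self.rr_pos[g] = (self.rr_pos[g] + i + 1) % len(names)
                    return queues[q].popleft()
        return None                                # nothing backlogged
```

Mapping a service queue to a group, and a group to a port, is what gives the "any desired QoS granularity" property: the same structure covers per-port, per-group, and per-service guarantees.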

What is possible

With a silicon foundation in place to deliver this scope of functionality, it becomes possible to build centralized and distributed system platforms quickly and efficiently, targeting network requirements from the access layer to the core, with data-processing capabilities up to 200 Gbps.

Consider, for instance, the design requirements of a customizable carrier-class access/aggregation switch with both redundancy and ring protection for high-density metro access/aggregation and core network applications. If this platform were built using NPUs, it would take a team of hardware, firmware, microcode, and software engineers 12 to 18 months to specify, develop, and test a line card. Because it is a processor-based design, it must be thoroughly tested to ensure deterministic behavior at the desired data rates and traffic loads under various scenarios. In addition, interoperability and conformance to standards remain a concern after the code has been developed.

Alternatively, the use of purpose-built silicon speeds time to market by dramatically reducing the development time and resources required. Specific operation modes are fully understood and detailed in product documentation, and development is backed by established design guidelines that reduce time otherwise wasted in trial and error. Interoperability and conformance are also not a concern, because the silicon vendor will have already tested the product and made results available for review. There is no need to redesign microcode or add NPUs to deliver switching and routing designs that meet Carrier Ethernet technology and feature requirements, such as QoS, resiliency, and OAM, while delivering cost-efficient systems.

As an example, Centec Networks uses purpose-built Carrier Ethernet packet-processing silicon to deliver access/aggregation system functionality in a platform that supports Layer 2 and Layer 3 Virtual Private Network (VPN) requirements, and implements all QoS, security, and Ethernet OAM protocols necessary for advanced MEF services. The system delivers wire-speed performance for IP and MPLS packet processing with advanced Metro Ethernet OAM management capabilities, and incorporates redundancy protection features into a high-density chassis design.

Carrier Ethernet is becoming the technology of choice for service providers who must increase service revenue while holding down capital expenditures and operational costs. The advent of purpose-built Carrier Ethernet chipsets with NPU-like programmability delivers the flexibility, scalability, service management capabilities, high availability, and assured QoS needed for the next generation of customizable Carrier Ethernet platforms.

Tao Gu is chief technology officer at Centec Networks.
