Meeting traffic demands with next-generation Internet infrastructure

May 1, 2001

SONET/SDH, ATM, IP

Router/switch line cards with higher bandwidth and better reliability will be key to building future Internet networks.

ANTHONY GALLO, Silicon Access Networks

Internet users are looking for fast networks that connect swiftly and easily. Today's networks have moved beyond e-mail and Web browsing: users want to extend their reach through virtual private networks (VPNs), multimedia, and other bandwidth-hungry applications.

As Internet use continues to grow in both the private and business sectors, the bandwidth and processing demands placed on service providers will require rapid deployment of multiple 10-, 20-, and 40-Gbit/sec router/switch ports or line cards to meet traffic demand. These line cards must support a wide range of requirements for applications such as 40-Gbit/sec pipes with minimal packet processing in the core of a service provider's network or 1- to 10-Gbit/sec pipes with deep packet processing at the edge between two service providers.

Additionally, these new line cards must be "industrial strength" in terms of reliability. Providing raw bandwidth and deep packet processing alone is not enough. Today's Internet equipment lacks the maturity and reliability of its carrier-class telecommunications predecessors, yet people demand the same dependability from the Internet that they expect from phone service. Building next-generation Internet equipment therefore raises important bandwidth and reliability issues; specifically, a set of fundamental building blocks is needed to construct high-bandwidth, reliable router/switch line cards.

The Internet was originally designed as a best-effort, low-bandwidth transport for individuals and groups to share information. The information was primarily electronic mail and relatively small data files by today's standards. With the explosive growth of the Internet, its applications are quickly evolving; more importantly, so are expectations. Sharing electronic mail and data files will become just a fraction of Internet use, and users will expect the Internet to be as reliable and useful as any other common utility.

High-bandwidth applications such as video require moving large amounts of data on a human time scale. Consider, in the not-so-distant future, movies being downloaded from the Internet, replacing the need to drive to a local video-rental store. For that to be feasible, not only is there a bandwidth factor, but also a time factor involved. How long would the average person wait to download a movie? A high-action DVD film consumes about 6 to 10 Gbytes of data. A standard 56K modem would require about two weeks to download it.

Cable modems and DSL reduce this time to about 12 hours. A 1-Gbit/sec link would take less than 2 minutes, about the time it takes to rewind a movie. Obviously, time would be the motivating factor for using this service, assuming other issues such as cost and selection remained comparable. The amount of bandwidth needed to serve this application is significant.
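The arithmetic behind these figures is easy to check. Below is a back-of-the-envelope sketch in Python, assuming an 8-Gbyte film (between the 6- and 10-Gbyte figures above) and nominal link rates; the exact values are illustrative, not measured.

```python
# Rough download-time check for an 8-Gbyte film over three access technologies.
FILM_BITS = 8 * 8 * 10**9             # 8 Gbytes expressed in bits

links = {                              # assumed nominal link rates in bits/sec
    "56K modem":         56_000,
    "cable/DSL (~1.5M)": 1_500_000,
    "1-Gbit/sec link":   1_000_000_000,
}

for name, rate in links.items():
    seconds = FILM_BITS / rate
    print(f"{name:>18}: {seconds / 86_400:5.1f} days "
          f"({seconds / 3_600:8.1f} hours, {seconds / 60:9.1f} minutes)")

# 56K modem:          ~13 days  (about two weeks)
# cable/DSL (~1.5M):  ~12 hours
# 1-Gbit/sec link:    ~1 minute
```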

Today's networks are buckling under the onslaught of data traffic, which is leading to Internet congestion. While there are conflicting views on the growth rate of Internet traffic, it's safe to say the Internet has more than doubled every year for the last five years and is showing no signs of slowing down. Increased bandwidth at the edge and new applications are contributing to this impressive growth rate. Millions of business and home users are signing up for high-speed Internet services such as cable modems and DSL, which increase access bandwidth by a factor of roughly 50 over dial-up.

The Internet infrastructure cannot support this onslaught of business and non-business application traffic while providing the guaranteed quality-of-service (QoS) levels required for many business applications. Without high-performance core and edge routers that combine high-bandwidth interfaces with robust QoS features, service providers cannot meet these needs.

High bandwidth is just one requirement. Low-bandwidth, low-latency applications such as voice are being widely deployed and require a predictable service level. If the Internet is ever to carry a significant amount of phone traffic, the reliability of this service will need to match today's expectations of phone service. Users automatically expect to hear a dial tone and have a call reach its destination. They also expect the conversation to last for the duration of the call and not be interrupted by a lost connection. These problems still plague wireless providers today, and they are the reason landline phones are in no danger of becoming extinct in the near future.

Figure 1. The problem for system vendors is that 90% of the component building blocks needed to build high-speed line cards are difficult to develop in-house.

The Internet will continue its evolution as long as customer expectations are met. Customers demand the reliability and features offered by the existing telephony industry, but with Internet flexibility. Service providers must meet these expectations by providing more than just best-effort services. Therefore, new technologies will be the driving force for the next-generation Internet infrastructure.

Looking at the problem from 30,000 feet, it appears simple. Service providers need to buy boxes from networking system vendors. System vendors need to build or buy the system components to build the boxes. The problem is that 90% of the component building blocks needed to build reliable 10-, 20-, and 40-Gbit/sec line cards are difficult to develop (see Figure 1). How many PC companies also build their own processor? A very strong correlation exists between the evolution of the PC industry and the evolving networking equipment industry. As networking technology becomes more difficult to develop, a new specialized industry will emerge to meet the demand.

Figure 2. This new specialized industry will provide the networking building blocks for developing high-bandwidth, industrial-strength core and edge routers/switches. These routers/switches will span the globe by using fiber-optic technology to connect line cards from one system to another.

This new, specialized industry will provide the networking building blocks that allow system vendors to build high-bandwidth, industrial-strength core and edge routers/switches. These devices will be linked together forming the backbone of the next-generation Internet (see Figure 2). The routers/switches span the globe by using fiber-optic technology to connect line cards from one system to another. The Internet actually comprises several disjointed and overlapping networks run by multiple independent service providers. These service providers come together at various points in the network to form peering points.

Terabit routers accept hundreds of 1- to 10-Gbit/sec line-card interfaces. These interfaces are expected to support the most advanced QoS features as well as deliver almost 100% carrier-class availability. Networking line-card components are at the foundation of this problem in terms of processing power and reliability. If these components are unreliable or incapable of transferring the offered bandwidth, the system as a whole will not meet expectations.

There are several ways to solve this problem. Some involve highly integrated, single-chip solutions with a "one size fits all" model. Others, at the opposite extreme, require many components. Each of these approaches has advantages, but also significant limitations, when trying to address both scalability and reliability. Single-chip solutions do not scale well and are difficult to merge with existing networking equipment. Generations of Internet products are already shipping today, and these products rest on more infrastructure than just line-card components. This existing infrastructure needs to be maintained initially and evolve over time to produce industrial-strength products. Any solution that dictates a forklift approach, where the existing infrastructure must be discarded, will create a new problem for system vendors and service providers.

For networking components to be useful, they must solve problems with well-defined boundaries and have standard interfaces. Problem decomposition is key because it allows one piece of the problem to scale while keeping the rest of the system intact. The power and physical area consumed by each of these components must be weighed as heavily as the function the device provides. The sum of individual components, each with a disproportionate power and area requirement, will in turn cause problems at the system level. The functional decomposition must address power and area at the line-card level. Embedded smart memories are needed to address bandwidth and reliability for components to reach 40-Gbit/sec and, some day, 100-Gbit/sec speeds. These smart memories have to do more than just address bandwidth and power. Designing in reliability at the embedded-memory level is key to producing reliable components.

Core and edge networking products must deliver low latency and guaranteed delay variance to support real-time traffic. These products must be carrier class and provide full redundancy and high availability to support mission-critical and real-time applications. It is imperative that these products provide the ability to reboot software modules and upgrade or reconfigure routing code, while maintaining live network sessions.

Figure 3. Seven key components are used to make up the data path of the line card. The components can handle both inbound and outbound traffic flows and use embedded smart memory rather than external memories, with the exception of the traffic-manager component.

One such line-card solution is shown in Figure 3. Here, the problem is divided into seven key components that make up the data path of the line card. The components can handle both inbound and outbound traffic flows and use embedded smart memory rather than external memories. The only exception is the traffic-manager component. It has embedded memory and makes use of cheap bulk memory for storing packets as they flow through the line card. This cheap bulk memory can then be protected using techniques such as error-correcting codes to provide reliability.
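As a rough illustration of how error-correcting codes protect cheap bulk memory, the Python sketch below implements a Hamming(7,4) single-error-correcting code. Real line cards typically apply wider SECDED codes across 64-bit memory words in hardware; this toy version, with hypothetical function names, only demonstrates the principle that a stored value survives a single flipped bit.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits (an int 0..15) into a 7-bit codeword."""
    d = [(nibble >> i) & 1 for i in range(4)]        # data bits d1..d4
    p1 = d[0] ^ d[1] ^ d[3]                          # parity over positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]                          # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]                          # parity over positions 4,5,6,7
    bits = [p1, p2, d[0], p3, d[1], d[2], d[3]]      # codeword positions 1..7
    return sum(b << i for i, b in enumerate(bits))

def hamming74_decode(codeword):
    """Correct any single-bit error and recover the original 4 data bits."""
    bits = [(codeword >> i) & 1 for i in range(7)]
    s1 = bits[0] ^ bits[2] ^ bits[4] ^ bits[6]
    s2 = bits[1] ^ bits[2] ^ bits[5] ^ bits[6]
    s3 = bits[3] ^ bits[4] ^ bits[5] ^ bits[6]
    syndrome = s1 | (s2 << 1) | (s3 << 2)            # 0 = clean, else error position
    if syndrome:
        bits[syndrome - 1] ^= 1                      # flip the corrupted bit back
    d = [bits[2], bits[4], bits[5], bits[6]]
    return sum(b << i for i, b in enumerate(d))

word = 0b1011
stored = hamming74_encode(word)
corrupted = stored ^ (1 << 4)                        # a single bit flips in bulk memory
assert hamming74_decode(corrupted) == word           # the data is still recovered intact
```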

The network-interface component provides standards-based link-level connection to the external world. Common networking interfaces include Ethernet and packet-over-SONET (PoS). The line-card-to-line-card interface can take many different forms, most of which are proprietary and beyond the scope of this article. It provides the internal path for line cards to talk to one another. A system comprises several, hundreds, or even thousands of line cards, depending on the applications. The purpose of the remaining components is to ensure that the incoming network traffic is directed to the appropriate outbound line card. This function is commonly referred to as routing or switching network traffic.

The packet-processor component performs a sequence of operations on each packet as it enters the system from the network side or the line-card-to-line-card side. This device verifies the packet's integrity and performs several operations based on the format of the packet. Many of the operations performed by the packet processor are handled internally, while some are dispatched to other components in the system. Search requests, or lookups, are operations typically dispatched by a packet processor.
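One concrete example of such an integrity check is the IPv4 header checksum. The hedged Python sketch below verifies it in software, purely to show the operation; a real packet processor performs the equivalent in hardware at wire speed.

```python
import struct

def ipv4_header_ok(header: bytes) -> bool:
    """The ones'-complement sum over the header, including its checksum field,
    must fold down to 0xFFFF for the header to be accepted."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack(f"!{len(header) // 2}H", header))
    while total >> 16:                               # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return (~total & 0xFFFF) == 0

# A sample 20-byte IPv4 header (192.0.2.1 -> 198.51.100.7, TTL 64, ICMP).
header = bytes.fromhex("45000054000000004001" "8e6d" "c0000201c6336407")
print(ipv4_header_ok(header))                                    # True
print(ipv4_header_ok(header[:10] + b"\x00\x00" + header[12:]))   # False: corrupted checksum
```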

Typically, multiple types of lookup operations are performed on a packet as it traverses the line card in both the inbound and outbound direction. These lookup operations can take many forms but fall into two major categories: address lookups that identify a specific destination and classification lookups that refer to a grouping of similar packet types. The address processor component in Figure 3 performs the lookup operation where a destination Internet address resides.
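To make the address lookup concrete, here is a hedged Python sketch of longest-prefix matching against a small, made-up forwarding table. Hardware address processors implement this with TCAMs or trie structures held in smart memory; only the matching semantics are the same.

```python
import ipaddress

# Hypothetical forwarding table: prefix -> outbound line card / next hop.
FIB = {
    ipaddress.ip_network("0.0.0.0/0"):   "card-0 (default route)",
    ipaddress.ip_network("10.0.0.0/8"):  "card-3",
    ipaddress.ip_network("10.1.0.0/16"): "card-7",
    ipaddress.ip_network("10.1.5.0/24"): "card-12",
}

def address_lookup(destination):
    """Return the entry for the longest prefix that covers the destination."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in FIB if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # the longest prefix wins
    return FIB[best]

print(address_lookup("10.1.5.20"))   # card-12 (matches /24, /16, /8, and /0)
print(address_lookup("10.2.9.1"))    # card-3  (matches /8 and /0 only)
print(address_lookup("192.0.2.1"))   # card-0 (default route)
```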

Current Internet devices hold about 100,000 addresses per line card and next-generation line cards will typically hold one million entries. The classifier component in Figure 3 is used to identify a set of policies that should be performed on traffic flowing through the line card. For example, these policies identify actions such as permit, deny, or assign a priority to a given flow or set of flows. These operations need to be monitored for management and billing applications.
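A classification lookup can be pictured as a first-match walk over an ordered rule list. The Python sketch below shows those semantics for a hypothetical access control list; a hardware classifier evaluates its rules in parallel but returns the same policy.

```python
import ipaddress

# Hypothetical ordered ACL: (src prefix, dst prefix, protocol, dst port, policy).
# None means "match anything" for that field.
ACL = [
    ("10.1.0.0/16", "0.0.0.0/0",   "tcp", 80,   {"action": "permit", "priority": "high"}),
    ("0.0.0.0/0",   "10.9.9.0/24", "udp", None, {"action": "deny"}),
    ("0.0.0.0/0",   "0.0.0.0/0",   None,  None, {"action": "permit", "priority": "best-effort"}),
]

def classify(src, dst, proto, dport):
    """Return the policy of the first rule whose fields all match the packet."""
    for src_net, dst_net, p, port, policy in ACL:
        if ipaddress.ip_address(src) not in ipaddress.ip_network(src_net):
            continue
        if ipaddress.ip_address(dst) not in ipaddress.ip_network(dst_net):
            continue
        if p is not None and p != proto:
            continue
        if port is not None and port != dport:
            continue
        return policy
    return {"action": "deny"}                        # implicit deny if nothing matches

print(classify("10.1.2.3",   "198.51.100.7", "tcp", 80))  # high-priority permit
print(classify("172.16.0.1", "10.9.9.5",     "udp", 53))  # deny
```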

The accountant component in Figure 3 maintains the counters required for those management and billing applications. The traffic-manager component shapes traffic and allows for many QoS levels. The traffic manager is needed for both the inbound and outbound data path. In the inbound data path, the traffic patterns to the outbound line card are unknown and commonly form a many-to-one traffic pattern where many inbound line cards are all sending traffic simultaneously to the same outbound card. For the system to be reliable, there is a need to buffer this traffic, much like on-ramps buffer cars until there is time to merge onto a busy expressway. The same situation occurs in the outbound direction as traffic attempts to leave the line card. The transmission line or the destination device on the other end may be in a state where it temporarily cannot receive the offered traffic load. The traffic manager shapes the inbound and outbound traffic to a manageable level, both internal to the router/switch and external between multiple routers/switches.
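As a rough sketch of the shaping function, the Python token bucket below decides whether a packet conforms to a configured rate or should instead sit in a queue. The rate and burst values are invented for illustration; an actual traffic manager maintains thousands of such contexts in embedded and bulk memory.

```python
import time

class TokenBucket:
    """Minimal single-rate token-bucket shaper (illustrative parameters only)."""
    def __init__(self, rate_bps, burst_bits):
        self.rate = rate_bps            # sustained rate in bits/sec
        self.burst = burst_bits         # maximum burst size in bits
        self.tokens = burst_bits        # start with a full bucket
        self.last = time.monotonic()

    def conforms(self, packet_bits):
        """True if the packet may be sent now; otherwise it should be queued."""
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

shaper = TokenBucket(rate_bps=1_000_000, burst_bits=15_000)   # 1 Mbit/sec, ~1 frame of burst
for i in range(5):
    # Back-to-back 1500-byte packets: the first fits the burst, the rest must wait.
    print(i, shaper.conforms(12_000))
```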

The outbound packet-processor component in Figure 3 needs to be optional, since the need for this component on the outbound data path depends heavily on the amount of packet processing required by the line card. To make this component optional, the traffic-manager component needs a limited subset of the features found in the packet processor, such as the ability to perform lookups and packet modifications. The traffic manager also needs to address scalability, since the QoS requirements for traffic management internal to the device are not exactly the same as those between multiple routers/switches.

Service providers need bandwidth, performance, and reliability to handle today's and tomorrow's emerging protocols. Although advances in fiber throughput and optical internetworking technologies have been dramatic, service providers are still faced with bottlenecks between service-provider edges and core routers (see Figure 4). The optical-network layer currently operates at OC-48 (2.5-Gbit/sec) and OC-192 (10-Gbit/sec) speeds, but today's routers do not offer corresponding interface speeds and port densities to effectively harness the bandwidth windfalls created by fiber and DWDM technologies. That's like trying to drink out of a fire hose.

Figure 4. Although advances in fiber throughput and optical internetworking technologies have been dramatic, service providers are still faced with the bottlenecks between service-provider edges and core routers.

As service providers deploy fiber and DWDM equipment to expand capacity, terabit-routing technologies are required to convert massive amounts of raw bandwidth into usable bandwidth for the service-provider edge and core. With optical-interface speeds greater than the switching capacity of routers themselves, gigabit routers run out of bandwidth. The optical technologies are available now to improve raw capacity but require terabit routers and switches to provide the performance and port density required to efficiently convert raw capacity into usable bandwidth.

Next-generation terabit routers and switches will need the ability to support existing data transport line cards to ensure interoperability. Service providers have deployed various Internet protocols on technologies such as DS-3 (44.736 Mbits/sec), OC-3 (155 Mbits/sec) and OC-12 (622 Mbits/sec). The next generation of routers and switches must provide a graceful migration for service providers wanting to leverage existing hardware as well as employee training.

Other ownership considerations, such as power usage, are also important factors to service providers. Over 50% of a service provider's yearly operating expenses can be attributed to power consumption for operating and cooling networking equipment. Because service providers must pay employee salaries and other expenses, much less than 50% is available for capital expenses, such as new networking equipment. Cost of ownership in terms of power consumption, physical space, employee training, and even lost revenue due to downtime, must be addressed, in addition to the actual expense of the equipment.

Internet Protocol (IP) is the most pervasive and widely known protocol in the world, and most applications either use IP or are in the process of migrating to it. Voice over IP (VoIP) is one such application. IP services must be flexible, available on demand, and easy to use. Moreover, they must become more reliable than existing offerings such as frame relay and leased lines.

SONET has been the medium of choice for delivering services over the metropolitan-area network (MAN). SONET uses time-division multiplexing (TDM), a technology that many experts think is outdated and incapable of delivering the high-speed data services users are demanding.

Over the past 10 years, service providers have installed fiber-optic cable throughout every major metro area in the United States and have standardized on SONET equipment. However, SONET's TDM structure may be too rigid for flexible provisioning or bandwidth sharing, and that rigidity has prevented service providers from offering high-speed, end-to-end data service at attractive rates.

It appears that the future will have IP as the protocol of choice for applications, because IP is an open and continually evolving standard. In service-provider edge and core routers, Multiprotocol Label Switching (MPLS) will also play a significant role. However, MPLS was created to complement IP-based networks and is more of a companion technology than a competing one.

At the enterprise, Ethernet is the interface of choice and many service providers are starting to migrate to Ethernet-based networks. Ethernet is already commonly used in the peering points between service providers. While the debate is still raging between Ethernet and SONET technologies, history has shown that the simpler technology has always won. In any case, both technologies will be with us for several years.

To meet customer requirements such as bandwidth provisioning, accounting, and billing for premium services, there will be a need for very fast, intelligent routers/switches utilizing OC-192/10-Gigabit Ethernet (10-GbE) links and, in the not-so-distant future, OC-768 (40-Gbit/sec) and 100-GbE links. The next-generation, packet-processing line cards have a fundamental need for large amounts of memory and memory bandwidth.

Memory is needed to store policing contexts, assembly contexts, QoS packet-descriptor entries, routing entries, multifield classification entries such as access control lists (ACLs), and various other state and control information. If it were just a matter of the quantity of memory, a standard embedded dynamic random-access memory (DRAM) product might be effective. However, memory size is not the only factor. The speed at which the memory can be accessed and the overall reliability are also crucial. Unfortunately, commercially available embedded memory does not address these requirements.

Network-product original-equipment-manufacturer (OEM) companies are now shifting their efforts to the service-provider space, where there is a need for bandwidth and highly intelligent, reliable processing. The following factors are driving this shift in focus:

  • The remote-access market has seen significant growth over the last few years in the number of users and in access bandwidth.
  • Companies are switching to VPNs, which means that more and more traffic is destined for the Internet.
  • Networks need to deploy service-level policies for continued growth.

Currently, the main problem encountered by network equipment OEM companies trying to provide intelligent terabit routers and switches is in obtaining the required underlying technology. The main cost of deploying a 10-Gbit/sec line card using standard technology offered today is not only the component price of the solution, but also the cost in terms of power budget, chip size, chip count, pin count, and physical board space.

The major requirements of a 10-Gbit/sec line-card chipset that would be used in an intelligent terabit router would include the following:

  • Total line-card power dissipation of less than 150 W.
  • Up to two million forwarding entries for IP, VPN, and multicast protocols. (This requirement has doubled from one million forwarding entries in just the last six months.)
  • Tens of thousands of ACL entries.
  • Tens of thousands of policing contexts (needed for traffic management).
  • Thousands of queues (needed for traffic buffering and shaping).
  • Thousands of schedulers for releasing packets from the queues.
  • Deep packet processing, from Layer 2 to Layer 7.
  • Tens of thousands of label-switched paths for managing MPLS tunnels.
  • Wire-speed forwarding for minimum-length packets (see the calculation after this list).
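The last requirement sets the packet-rate budget for the entire chipset. A quick Python calculation, assuming minimum-length 64-byte Ethernet frames plus the 20 bytes of preamble and interframe gap, shows what wire speed means at 10 Gbits/sec:

```python
# Packet rate at wire speed for minimum-length Ethernet frames on a 10-Gbit/sec port.
LINE_RATE = 10 * 10**9        # bits/sec
MIN_FRAME = 64                # bytes, minimum Ethernet frame
OVERHEAD  = 8 + 12            # bytes: preamble/SFD plus interframe gap

bits_per_packet = (MIN_FRAME + OVERHEAD) * 8
pps = LINE_RATE / bits_per_packet
print(f"{pps / 1e6:.2f} million packets/sec")   # ~14.88 Mpps
print(f"{1e9 / pps:.1f} ns per packet")         # ~67 ns to finish every operation above
```

At roughly 67 ns per minimum-length packet, every lookup, policing, counting, and queuing operation in the list above must fit inside that budget, which is why memory speed is as critical as memory size.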

Until now, external static random-access memory (SRAM) and DRAM have been the answer for providing both the required memory size and speed, but they do not address reliability. This is the path taken by first-generation 10-Gbit/sec line-card solutions. For on-chip memory, embedded SRAM is the only option offered by standard application-specific integrated-circuit (ASIC) methodologies. External SRAM requires a significant amount of board space, which is at a premium for system designers. These external SRAM-based solutions also create input/output skew and signal-integrity issues, and those issues worsen rapidly as board area increases.

What's needed to solve this problem is a combination of embedded smart memory and external-memory building blocks. Having these building blocks and, more important, knowing how to use them is key. Smart embedded-memory technology will provide one of the necessary building blocks for a line-card chipset solution that addresses carrier-class requirements. Smart embedded memories will have the speed and reliability built in to ensure on-chip information is not lost during packet processing.

Current OC-192 line cards are priced above $200,000, but the price must come down 40% or more a year to support the market growth. Even with these reductions, the price target for 10-GbE line cards for the MAN and enterprise markets cannot be met. The 10-GbE line-card market will need to reach the cost point of a 1-GbE port to be widely deployed. Initially, these cards will cost 10 times as much as a 1-GbE port, but will offer enhanced classification features and high programmability. However, this price will drop quickly to four times the cost, if past Ethernet pricing is any indication of future 10-GbE pricing.

A new class of 10-Gbit/sec line-card chipset is required that will meet all of the technical and business needs mentioned. This solution will be based on smart memories, which will enable very fast, flexible, reliable, and cost-effective 10-Gbit/sec chipset solutions. Explosive Internet growth and increased bandwidth demands at the edge of the network are driving this need.

Terabit networking is tailored toward service providers that need to develop winning strategies to migrate to next-generation networks while delivering new levels of bandwidth, throughput, QoS control, and reliability. Successful service providers will meet the core and edge challenge of creating and managing scalable networks with greater flexibility, dependability, and capacity while reducing capital and operational costs. Providers need to deploy the increased bandwidth required by applications such as the migration of private networks to VPNs to support projected traffic demand through the coming decade. Two key pieces of this evolution are 10-Gbit/sec PoS and Ethernet.

Anthony Gallo is director of product marketing for Silicon Access Networks Inc. (San Jose, CA). He can be reached at [email protected]. For more information about the company, visit www.siliconaccess.com.
