Evolution of terabit-router testing

Feb. 1, 2001

Development of dynamic routing protocols adds more resilience to the network.

TONY DE LA ROSA, IXIA

The explosive growth of the Internet and the demand for more reliable routing systems have contributed to the development of dynamic routing protocols. These protocols make it possible to automatically add and delete entries within routing tables based on network conditions, thus providing a much more resilient network. Over the years, a plethora of routing protocols has emerged to address different network requirements. Fortunately, most of these protocols can be classified along two lines: the network environment in which they operate and the type of algorithm used to create their routing tables.

The first classification divides routing protocols into interior and exterior gateway protocols. Interior gateway protocols (IGPs) route network prefixes inside a single domain known as an autonomous system, whereas exterior gateway protocols (EGPs) route between autonomous systems. An autonomous system is a network normally under one administrative control, perhaps that of a large company or a university. Small sites, however, tend to be part of their Internet service provider's autonomous system. Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), and Routing Information Protocol (RIP) are examples of IGPs that make routing decisions within a single autonomous system. In contrast, Border Gateway Protocol-4 (BGP-4) is an EGP typically used for interdomain communications.

The second routing-protocol category is defined by the type of algorithm used to generate the routing table. The two major algorithm families are link-state and distance-vector. Link-state protocols exchange information about links and nodes, which means that routers running link-state protocols do not exchange routing tables. Instead, each router inside a domain has the necessary information to run a shortest-path algorithm and build its own routing table.
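To make the link-state approach concrete, the sketch below shows how a router with a complete view of link costs can build its own routing table by running a shortest-path computation. It is a minimal illustration of the idea, not any particular OSPF implementation; the topology, costs, and function names are invented for the example.

```python
import heapq

def shortest_paths(topology, source):
    """Dijkstra's algorithm: compute the least-cost distance and first hop
    from `source` to every other router, given a full link-state view.
    `topology` maps each router to a dict of {neighbor: link cost}."""
    dist = {source: 0}
    first_hop = {}
    heap = [(0, source, None)]          # (cost so far, router, first hop used)
    while heap:
        cost, node, hop = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue                    # stale entry already superseded
        for neighbor, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                # The first hop is the source's neighbor on this path.
                first_hop[neighbor] = neighbor if node == source else hop
                heapq.heappush(heap, (new_cost, neighbor, first_hop[neighbor]))
    return dist, first_hop

# Hypothetical four-router domain with symmetric link costs.
topology = {
    "A": {"B": 1, "C": 5},
    "B": {"A": 1, "C": 1, "D": 4},
    "C": {"A": 5, "B": 1, "D": 1},
    "D": {"B": 4, "C": 1},
}
print(shortest_paths(topology, "A"))   # each router runs this independently
```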

Distance-vector protocols, on the other hand, do exchange routing tables, and in large networks these tables can be very hard to maintain. Additionally, distance-vector protocols typically use hop count as the cost of reaching a particular network. Because hop count says nothing about the type or speed of the interfaces along a path, a low-speed link could easily be preferred over a high-speed link if it yields a lower hop count to a given destination.
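The following sketch illustrates the distance-vector idea: each router keeps only a table of hop counts learned from its neighbors and adopts a neighbor's route whenever it offers a shorter count. The router names and helper function are invented for the example; real protocols such as RIP add timers, split horizon, and a maximum hop count.

```python
def merge_neighbor_table(own_table, neighbor, neighbor_table):
    """One distance-vector update step: adopt any destination the neighbor
    can reach in fewer hops than we currently know about.
    Tables map destination -> (hop count, next hop)."""
    changed = False
    for destination, (hops, _) in neighbor_table.items():
        candidate = hops + 1                      # one extra hop via the neighbor
        current = own_table.get(destination, (float("inf"), None))[0]
        if candidate < current:
            own_table[destination] = (candidate, neighbor)
            changed = True
    return changed                                 # True means we re-advertise

# Hypothetical example: router A learns about network 10.2.0.0/16 from B.
table_a = {"10.1.0.0/16": (0, "direct")}
table_b = {"10.1.0.0/16": (1, "A"), "10.2.0.0/16": (0, "direct")}
merge_neighbor_table(table_a, "B", table_b)
print(table_a)   # {'10.1.0.0/16': (0, 'direct'), '10.2.0.0/16': (1, 'B')}
```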

(Figure caption) Although traffic is congested (upper map), the customer attempts to provide best-effort and differentiated services. Data is sent whenever it must be sent, in any quantity, without requesting permission or first informing the network. The network delivers data if it can, without any assurance of reliability, delay bounds, or throughput. Even though the label-switched paths may have additional hops (lower map), they nonetheless provide faster, more reliable service than the traditional shortest-path IGP route.

In today's high-demand Internet environment, two routing protocols that employ the algorithms just outlined are currently receiving a great deal of attention. These protocols are BGP-4 and OSPF.

OSPF uses a link-state algorithm that floods routing information to all nodes in a particular network. With this protocol, each router sends only the portion of the routing table that describes the current state of its own links. In contrast, BGP-4 uses a distance-vector algorithm that communicates with its neighbors exclusively, which may in turn update all or some portion of their routing tables. In essence, link-state algorithms send small updates everywhere, while distance-vector algorithms send larger updates only to neighboring routers.

Manufacturers of terabit routers and Internet service providers (ISPs) have expressed interest in terabit-router emulation software. For equipment manufacturers, router metrics such as the maximum number of routes stored and route convergence times are important. For ISPs, quality of service (QoS) is vitally important to their business model. In particular, QoS features provide better and more predictable network service by supporting dedicated bandwidth, improving loss characteristics, minimizing network congestion, and shaping network traffic. Yet, no matter how much QoS profiling is done, the basic metrics are throughput, frame loss, and latency. Hence, QoS and router performance can be evaluated by these measurements.
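As a simple illustration of those three basic metrics, the sketch below derives throughput, frame loss, and latency from the raw counts and timestamps a test port might report. The field names and numbers are invented for the example; real test equipment reports these values directly.

```python
from dataclasses import dataclass

@dataclass
class PortCounters:
    """Raw results one might collect for a single test stream."""
    frames_sent: int
    frames_received: int
    bits_received: int
    test_seconds: float
    tx_timestamps: list      # transmit time of each received frame (seconds)
    rx_timestamps: list      # arrival time of each received frame (seconds)

def basic_metrics(c: PortCounters):
    throughput_bps = c.bits_received / c.test_seconds
    frame_loss_pct = 100.0 * (c.frames_sent - c.frames_received) / c.frames_sent
    latencies = [rx - tx for tx, rx in zip(c.tx_timestamps, c.rx_timestamps)]
    avg_latency_s = sum(latencies) / len(latencies)
    return throughput_bps, frame_loss_pct, avg_latency_s

counters = PortCounters(
    frames_sent=1_000_000, frames_received=999_500,
    bits_received=999_500 * 512 * 8, test_seconds=10.0,
    tx_timestamps=[0.000100, 0.000200], rx_timestamps=[0.000145, 0.000248],
)
print(basic_metrics(counters))   # ~409 Mbps, 0.05% loss, ~46.5 us average latency
```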

Following are two scenarios for real applications where terabit-router tests can simplify performance analysis and the gathering of metrics:

  • Scenario 1. A large manufacturer of OSPF routers has simulated a customer installation in a test lab. Its customer is in the process of adding OSPF routers to accommodate an influx of new users. The customer has asked the manufacturer to verify that the load of new users will not impact its existing network.
  • Scenario 2. An ISP needs to maintain certain service levels to ensure its ability to meet the service-level agreements (SLAs) it has negotiated with its customers. Traffic can be routed on the ISP's own leased lines or across third-party lines. An ISP can be heavily penalized if service levels are not met. Hence, before deploying a new router, an ISP will want to verify that its SLAs will not be impacted.

Until recently, customers with testing requirements had limited options. Typically, an Internet Protocol (IP) traffic generator would be used to evaluate the performance of a router. A particular test could entail transmitting IP packets with certain QoS priorities while oversubscribing the line with IP data packets, then evaluating how the router responds. Unfortunately, this type of analysis reveals little about how the network will perform once it is deployed or upgraded. A more realistic test scenario would be testing an entire network.

By testing the whole system, it's possible to validate the interoperability between various routers, fine-tune QoS policies, and examine "what if" scenarios. For mission-critical applications, this approach is preferred. The disadvantages are the cost and complexity of managing such a system. For example, a customer with routers placed in various parts of the United States will find it very expensive to duplicate such an environment. In addition, making a platform change to provide additional services could be prohibitively expensive.

There is substantial complexity involved in changing a network configuration when network operators want to implement traffic engineering to improve the customer's SLA or QoS metrics (see Figure). In this illustration, traffic from Seattle to San Francisco is becoming congested, as is the traffic from Seattle to New York. Currently, the customer is providing best-effort and differentiated services. For best-effort service, data is sent whenever it must be sent, in any quantity, without requesting permission or first informing the network. The network delivers data if it can, without any assurance of reliability, delay bounds, or throughput. For differentiated service, a selected portion of the traffic in the customer's network is marked, based on source, destination, or application requirements. The network then tries to deliver the particular kind of service based on the QoS specified by each packet.
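For differentiated service, the per-packet marking lives in the DSCP field of the IP header. The snippet below is a minimal sketch of how a sending application could mark its traffic on a typical system; the DSCP value and destination address are only examples, and in practice the marking is often applied by the ingress router rather than the host.

```python
import socket

# Expedited Forwarding (EF) uses DSCP value 46; the ToS byte carries DSCP
# in its upper six bits, so the value written to IP_TOS is 46 << 2.
EF_DSCP = 46
tos_byte = EF_DSCP << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# Any datagram sent on this socket now carries the EF marking, which
# downstream routers can use to select the differentiated treatment.
sock.sendto(b"probe", ("198.51.100.10", 5000))   # example destination
```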

Some may argue that it's possible to manually manipulate IGP metrics to allow traffic to travel a non-default path. For instance, IGP metrics could be modified so the cost of sending traffic through a four-hop path is less than the cost of sending it through a three-hop path. The problem is that without complex configuration changes at the ingress router, all traffic would now traverse this new four-hop path, because the customer would have no ability to send different traffic over different paths. In addition, metric manipulation in intricate topologies is difficult and can result in network destabilization when bursty traffic takes new and possibly unexpected paths.
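A tiny worked example, with invented router names and link costs, shows both the trick and its drawback: once the metric on one link of the three-hop path is raised, the IGP's shortest-path computation prefers the four-hop path for all traffic between the two endpoints, not just the traffic the operator wanted to move.

```python
# Link costs for two candidate paths between routers R1 and R5 (invented).
three_hop_path = [("R1", "R2"), ("R2", "R3"), ("R3", "R5")]
four_hop_path  = [("R1", "R6"), ("R6", "R7"), ("R7", "R8"), ("R8", "R5")]

costs = {("R1", "R2"): 10, ("R2", "R3"): 10, ("R3", "R5"): 10,
         ("R1", "R6"): 10, ("R6", "R7"): 10, ("R7", "R8"): 10, ("R8", "R5"): 10}

def path_cost(path):
    return sum(costs[link] for link in path)

print(path_cost(three_hop_path), path_cost(four_hop_path))   # 30 vs 40

# The operator raises the metric on the congested R2-R3 link.
costs[("R2", "R3")] = 25
print(path_cost(three_hop_path), path_cost(four_hop_path))   # 45 vs 40

# The four-hop path is now cheaper, so every flow from R1 to R5 moves to it;
# the IGP metric alone offers no way to shift only a subset of the traffic.
```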

To implement traffic engineering, the customer must implement an explicit signaling protocol. The signaling protocol enables an application to inform the network of its traffic profile and to request a particular kind of service that can encompass its bandwidth and delay requirements. In addition, the application is expected to send data only after it gets a confirmation from the network.
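The sketch below captures that request-then-confirm discipline in the abstract: the sender describes its traffic profile, waits for the network to admit the request, and only then begins transmitting. It is a generic illustration of explicit signaling, not the message format of RSVP-TE or any other specific protocol; the names and the `network.admit` interface are invented.

```python
from dataclasses import dataclass

@dataclass
class TrafficProfile:
    bandwidth_mbps: float      # requested reserved bandwidth
    max_delay_ms: float        # requested delay bound

def request_service(network, profile: TrafficProfile):
    """Explicit signaling: describe the traffic before sending any of it."""
    reservation = network.admit(profile)       # network may accept or refuse
    if reservation is None:
        raise RuntimeError("network refused the requested service")
    return reservation                          # e.g., a path or label to use

# The application sends data only after the confirmation arrives:
# reservation = request_service(network, TrafficProfile(bandwidth_mbps=50, max_delay_ms=20))
# send_data(reservation, payload)
```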

By implementing Multiprotocol Label Switching (MPLS) traffic engineering, the customer can shape the traffic to meet QoS metrics and SLAs. Paths with the highest capacity and the lowest congestion can be identified. Based on this information, the customer can then create a label-switched path (LSP) to carry the specified traffic (see Figure). Note that although the LSPs may have additional hops, they nonetheless provide faster, more reliable service than the traditional shortest-path IGP route.
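Conceptually, an engineered LSP is an explicitly chosen sequence of hops with a bandwidth reservation attached, as the sketch below illustrates. The hop names, field names, and figures are invented for the example and do not correspond to any vendor's configuration syntax.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class EngineeredLSP:
    """An explicit, traffic-engineered label-switched path."""
    name: str
    ingress: str
    egress: str
    explicit_hops: List[str] = field(default_factory=list)  # ordered transit routers
    reserved_mbps: float = 0.0

# The engineered path takes more hops than the congested IGP shortest path,
# but it follows links with spare capacity; only the specified traffic is
# steered onto it, while everything else keeps using the default route.
lsp = EngineeredLSP(
    name="seattle-to-sanfran-bypass",
    ingress="Seattle", egress="San Francisco",
    explicit_hops=["Salt Lake City", "Los Angeles"],
    reserved_mbps=200.0,
)
```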

Implementing such changes requires resources in capital, time, and effort. Fortunately, network-performance-analysis manufacturers have listened to customers' needs and are creating routing- and signaling-protocol emulation software to expedite the process of validating and fine-tuning networks.

Emulation software, for instance, is available to test high-speed, high-capacity terabit routers by creating multiple BGP sessions and generating hundreds of thousands of IP prefixes. Multiple ports can be connected to a device under test. Each chassis can simulate up to 100 peers, and each peer can be defined as an intra-autonomous-system or inter-autonomous-system router.
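The value of such emulation is largely in scale: a handful of test ports stands in for hundreds of peers and an Internet-sized routing table. The sketch below shows, under invented names and a hypothetical configuration layout, how a test might enumerate the peers and prefixes to advertise; it is not the API of any particular test product.

```python
from ipaddress import IPv4Network
from itertools import islice

def generate_prefixes(base: str, count: int):
    """Carve `count` /24 prefixes out of a base block to advertise over BGP."""
    return list(islice(IPv4Network(base).subnets(new_prefix=24), count))

def build_peer_configs(local_asn: int, peer_count: int, prefixes_per_peer: int):
    """One emulated BGP peer per entry; each advertises its own prefix block."""
    peers = []
    for i in range(peer_count):
        peers.append({
            "local_asn": local_asn,
            "peer_id": f"10.0.{i}.1",                     # emulated router ID
            "remote_asn": 65_000 + i,                      # external (EBGP) peers
            "prefixes": generate_prefixes(f"{100 + i}.0.0.0/8", prefixes_per_peer),
        })
    return peers

# 100 emulated peers, each advertising 2,000 /24 prefixes: 200,000 routes
# offered to the device under test from a single test chassis.
configs = build_peer_configs(local_asn=65535, peer_count=100, prefixes_per_peer=2000)
print(len(configs), sum(len(p["prefixes"]) for p in configs))   # 100 200000
```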

Additionally, emulation software allows customers to construct elaborate network simulations, easily inject large numbers of link-state advertisements into a test environment, and validate routes by transmitting streams on the same port. Available emulation software can simulate the interfaces of a range of router types: internal routers all belonging to the same area, autonomous-system border routers, and routers advertising external routing information.
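Route validation closes the loop: after the emulated routers advertise a topology, the test sends traffic addressed to the advertised prefixes and checks that the device under test actually forwards it back to the test port. The sketch below expresses that check in the abstract, with invented counters and prefix lists rather than any real product's interface.

```python
def validate_routes(advertised_prefixes, received_counts, frames_per_prefix):
    """Compare frames received per advertised prefix against what was sent.
    `received_counts` maps prefix -> frames that came back on the test port."""
    missing, lossy = [], []
    for prefix in advertised_prefixes:
        got = received_counts.get(prefix, 0)
        if got == 0:
            missing.append(prefix)          # route never installed or not forwarded
        elif got < frames_per_prefix:
            lossy.append(prefix)            # installed, but dropping traffic
    return missing, lossy

advertised = ["10.1.1.0/24", "10.1.2.0/24", "10.1.3.0/24"]
received = {"10.1.1.0/24": 1000, "10.1.2.0/24": 640}
print(validate_routes(advertised, received, frames_per_prefix=1000))
# (['10.1.3.0/24'], ['10.1.2.0/24'])
```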

Internet growth has dramatically transformed the network-testing industry. Routers have become sophisticated and complex. Over the years, two classes of routing protocols have come to dominate: BGP-4 has become the standard for routing between autonomous systems, while link-state protocols such as OSPF and IS-IS have become popular within autonomous systems. Because there is very little room for error, the challenge for a customer is to validate a system configuration prior to production.

Network test equipment manufacturers must also evolve to meet these new challenges. Short of becoming actual routers, the new generation of test equipment must be able to emulate hundreds or even thousands of routing sessions, and the accompanying emulation software must keep pace with new application scenarios.

Tony De La Rosa is the product manager for terabit-router testing solutions at Ixia (Calabasas, CA). He can be reached via the company's Website, www.ixiacom.com.
