By Paul Mooney
Overview
It’s true that 100G Ethernet is still Ethernet, just faster. But the speed increase creates several hurdles when it comes to system test.
The clock is ticking on SONET/SDH. Projections indicate that the grand dame of telecommunications, which has carried our voice and data traffic for decades, could be replaced by 2016. And the hand winding the clock belongs to that once humble but now stately protocol, Ethernet.
From its inauspicious beginnings more than 30 years ago as a method to connect workstations, Ethernet has become pervasive in access and metro networks. With 40/100-Gigabit Ethernet (GbE) products less than a year away, Ethernet is now poised to dominate the end-to-end network.
The usual suspects are driving the transition—storage networks, data center virtualization, and HD video. High-speed Ethernet interfaces will be used in obvious capacity-challenged points in the network, such as Internet exchanges and service provider peering points. They will also be found in data centers for switching, routing, and aggregation, and for applications such as video on demand, medical imaging, and high-performance computing.
Optical implementations of 40/100GbE use multiple fibers or wavelengths as 10G or 25G lanes. The transmitting system splits a serial 40G or 100G stream into four or ten parallel lanes, and the receiving system re-serializes those lanes into a single 40G or 100G stream.
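To make the mechanics concrete, here is a minimal sketch of the split-and-reserialize round trip. It treats the stream as fixed-size blocks dealt round-robin across lanes; the real 802.3ba PCS distributes 66-bit blocks and periodically inserts alignment markers, so this is only the shape of the idea.

```python
# Simplified model: deal fixed-size blocks of a serial stream across
# lanes, then re-serialize them on the far side. Block size and lane
# count are illustrative, not the actual 802.3ba PCS parameters.

def split_into_lanes(stream: bytes, num_lanes: int, block: int = 8) -> list[list[bytes]]:
    """Deal consecutive blocks of the serial stream round-robin across lanes."""
    blocks = [stream[i:i + block] for i in range(0, len(stream), block)]
    lanes = [[] for _ in range(num_lanes)]
    for i, b in enumerate(blocks):
        lanes[i % num_lanes].append(b)
    return lanes

def merge_lanes(lanes: list[list[bytes]]) -> bytes:
    """Re-serialize by reading one block from each lane in turn."""
    out = bytearray()
    for row in zip(*lanes):  # assumes the stream divides evenly across lanes
        for b in row:
            out.extend(b)
    return bytes(out)

stream = bytes(range(64))  # 8 blocks across 4 lanes -> 2 blocks per lane
assert merge_lanes(split_into_lanes(stream, num_lanes=4)) == stream  # lossless round trip
```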
And here’s where a different kind of clock comes into play. While it’s true that 100GbE is still Ethernet, just faster, there’s a lot packed into that phrase “just faster.”
On the clock
At the upper layers, “just faster” means each component in a device, and each step in a process, must accomplish what it does today in one-tenth of the time. Consider a router, for example, which strips lower-layer information from an incoming packet, queues it, performs a route lookup, and sends it to the proper outbound queue to be packetized, all while performing filtering, service-level agreement (SLA) monitoring and policing, and class-of-service/quality-of-service prioritization. In addition, a router sets up and tears down virtual private network (VPN) connections; builds multicast routing trees; performs routing table updates for multiple protocols; maintains statistics and performance, alarm, event, and failure logs; and performs firewall and security functions, such as key exchanges, attack detection and prevention, and encryption/decryption (an enormous processing task in itself).
A router with 100G interfaces must do all this at 10× current maximum speeds without dropping packets, introducing excessive jitter, compromising VPN boundaries, or reordering packets, which is especially disruptive for storage and high-bandwidth video.
From a testing perspective, the performance requirements haven’t changed. The metrics of interest remain the same. It’s just a question of whether the system under test can keep up with the bit rate. Or is it?
Actually, it’s also a question of whether the test system can keep up. The metrics of interest, such as packet count, loss, sequence errors, latency, and jitter, are only meaningful if the test system can measure and deliver them at the new rates.
So here’s where another clock comes into play—the clock on the test system.
A test system counts packets and computes latency and jitter by placing a proprietary signature in the test packets it generates. The signature includes a serial number and a timestamp, critical elements for calculating the metrics of interest. For example, to measure latency, the test system records the time the packet is transmitted in the timestamp field. When the packet is received, the time is noted. To calculate latency, how long it took the packet to travel the link, the test system subtracts the transmit timestamp from the receive timestamp. For this calculation to have any meaning, the transmit port and receive port must be synchronized.
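As a rough illustration of that bookkeeping, the sketch below derives packet count, reordering, latency, and jitter from (signature, receive-time) pairs. The field names and the simple max-minus-min jitter definition are illustrative assumptions, not any vendor’s actual signature format, and the math only holds if the transmit and receive clocks are synchronized.

```python
# Illustrative signature-based metrics. Field names (seq, tx_ts) are
# assumptions for this sketch; real test-system signatures are proprietary.
from dataclasses import dataclass

@dataclass
class Signature:
    seq: int       # serial number written at transmit
    tx_ts: float   # transmit timestamp, in seconds

def analyze(received: list[tuple[Signature, float]]) -> dict:
    """received: (signature, rx_timestamp) pairs in arrival order.
    Assumes transmit and receive clocks are synchronized."""
    latencies = [rx - sig.tx_ts for sig, rx in received]
    reordered = sum(1 for (a, _), (b, _) in zip(received, received[1:])
                    if b.seq < a.seq)
    return {
        "rx_count": len(received),
        "avg_latency": sum(latencies) / len(latencies),
        "jitter": max(latencies) - min(latencies),  # simplistic: latency spread
        "reordered": reordered,
    }
```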
Time for test
As with any measurement, precision matters: it’s the plus-or-minus value that follows a reading, and the basis of reproducibility or repeatability. If you’re measuring the length of your vacation, a calendar is fine. But plus or minus one sunrise isn’t useful when you’re timing boiled eggs. For that you need greater precision: a clock that can measure durations down to a minute, or even a fraction of a minute.
However, if you are dealing with single-digit nanosecond values, ±20 ns is not adequate precision. To say a number reported as 30 ns is actually somewhere between 10 ns and 50 ns is not helpful when single digits make a difference.
And here is where clock speed can create testing problems for 40/100GbE. To uniquely timestamp every packet, the resolution of the test system clock must be less than the time it takes to transmit a minimum-sized Ethernet frame of 64 bytes, plus an 8-byte preamble and the minimum 12-byte inter-frame gap: 84 bytes, or 672 bits, in all. Regardless of the speed of Ethernet in use, then, a packet can be transmitted every 672 bit times. There must be at least one timestamp clock tick for every frame; two packets cannot be allowed to fall within a single tick.
For 10GbE, a 20-ns resolution clock works fine (see Fig. 1). It takes 67.2 ns to transmit a 64-byte frame. Every frame spans at least four clock ticks.
For 40GbE, problems begin to arise with a 20-ns resolution clock (see Fig. 2). It takes 16.8 ns to transmit a 64-byte frame, so it doesn’t take long for two frames to fall within a single tick of the timestamp clock. Latency and jitter measurements will not be accurate in this scenario; there may even be issues with counting packets.
For 100GbE, accurate measurements are impossible with a 20-ns resolution clock. It takes 6.72 ns to transmit a 64-byte frame, so every tick of the timestamp clock contains nearly three frames.
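The arithmetic behind these three cases is simple enough to verify directly; the snippet below assumes only the 672-bit minimum packet interval derived earlier and a hypothetical 20-ns timestamp tick.

```python
# Minimum-size frame (64 B) + preamble (8 B) + inter-frame gap (12 B)
# occupies 672 bit times, so the frame time shrinks linearly with rate.
MIN_FRAME_BITS = (64 + 8 + 12) * 8   # 672 bits
TICK_NS = 20.0                       # hypothetical timestamp resolution

for rate_gbps in (10, 40, 100):
    frame_ns = MIN_FRAME_BITS / rate_gbps   # bits / (Gbit/s) -> ns
    print(f"{rate_gbps:>3} GbE: {frame_ns:5.2f} ns/frame, "
          f"{frame_ns / TICK_NS:4.2f} ticks per frame")
#  10 GbE: 67.20 ns/frame, 3.36 ticks per frame
#  40 GbE: 16.80 ns/frame, 0.84 ticks per frame
# 100 GbE:  6.72 ns/frame, 0.34 ticks per frame
```

Below one tick per frame, distinct frames can no longer receive distinct timestamps, which is exactly the 40G and 100G failure mode described above.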
In multichassis tests, the issue of clock resolution goes beyond generating an internal timestamp to synchronizing clocks across multiple systems. Time-based measurements such as latency and jitter involve subtracting the transmit time from the receive time. If the transmit and receive ports are on different systems, the clocks of those systems must be precisely synchronized for those measurements to have meaning.
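A toy example, with made-up numbers, shows why: any offset between the two chassis clocks appears in full as latency error.

```python
# Illustrative numbers only: a 50-ns offset between chassis clocks
# swamps a 30-ns true latency.
true_latency_ns = 30.0
clock_offset_ns = 50.0   # rx chassis clock runs 50 ns ahead of tx chassis

tx_ts = 1_000.0                                     # per the tx clock
rx_ts = tx_ts + true_latency_ns + clock_offset_ns   # per the rx clock
measured = rx_ts - tx_ts
print(measured)  # 80.0 ns reported for a 30-ns path
```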
Another test issue arises from the fact that, as mentioned previously, optical implementations of 40/100GbE split a 40G or 100G stream across multiple fibers or wavelengths used as 10G or 25G lanes, then recombine the lanes at the far end. In fact, lanes are used at several levels in an interface. Figure 3 shows some examples of possible lane changes between sublayers in the interface; the numbers in parentheses show the number of lanes entering and exiting a layer.
Test implications
This lane-based architecture has implications for a test system. Obviously, the test system must be able to combine the lanes into a single stream of traffic at the MAC layer. And it must provide per-port metrics on the aggregated 40G/100G traffic as a single entity, not as traffic on individual lanes.
There is no requirement in the specification for a static mapping of virtual lanes to physical lanes. Lane swapping can occur. The test system should report any swapping performed by the system under test. In addition, the test system must have the capability to deliberately swap lanes to verify the system under test can compensate.
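Conceptually, detecting a swap is straightforward, because each virtual lane’s alignment marker identifies the lane it belongs to. The sketch below assumes the markers have already been decoded into lane IDs; the representation is illustrative, not the actual 802.3ba marker format.

```python
# Illustrative lane-swap detection: map each physical lane to the
# virtual lane ID decoded from its alignment marker.
def lane_mapping(observed_ids: list[int]) -> dict[int, int]:
    """Map physical lane index -> virtual lane ID found on that lane."""
    mapping = {phys: virt for phys, virt in enumerate(observed_ids)}
    assert sorted(mapping.values()) == list(range(len(observed_ids))), \
        "every virtual lane must appear exactly once"
    return mapping

print(lane_mapping([2, 0, 3, 1]))  # {0: 2, 1: 0, 2: 3, 3: 1} -- lanes arrived swapped
```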
As with earlier multilane 10G interfaces, multiple lanes per link bring the problem of lane skew: variations in flight time between lanes caused by imperfections in the electrical or optical interfaces or media. The IEEE specification addresses skew with alignment blocks. The sending system periodically inserts an alignment block into the 40G/100G stream before it is split, allowing the receiving system to identify the bits from each lane that should arrive simultaneously. The receiving system maintains alignment between the lanes by compensating for any inter-lane skew.
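The sketch below models the receive-side idea, with lanes as simple symbol lists and a stand-in marker. Real hardware deskews by delaying the early lanes in elastic buffers and operates on 66-bit blocks; this is only the concept.

```python
MARKER = "AM"   # stand-in for an 802.3ba alignment marker

def deskew(lanes: list[list[str]]) -> tuple[list[list[str]], list[int]]:
    """Align every lane to start at its first marker, and report each
    lane's skew, in symbols, relative to the earliest-arriving lane."""
    positions = [lane.index(MARKER) for lane in lanes]
    earliest = min(positions)
    skews = [p - earliest for p in positions]
    aligned = [lane[p:] for lane, p in zip(lanes, positions)]
    return aligned, skews
```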
As with any other aspect of a system, skew tolerance and compensation algorithms must be validated. The test system introduces skew on the transmit port and, on the receive port, reports the amount of skew remaining after compensation by the system under test. The IEEE specification indicates how much skew a system should be able to compensate for. Skew testing thus verifies that the system under test meets the standard, or reports the degree to which it falls short.
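Continuing the toy model, injecting skew is just delaying one lane and checking what deskew() reports; a real test port would apply calibrated delays and compare the result against the skew budget in the specification.

```python
lane0 = ["AM", "a0", "a1", "a2"]
lane1 = ["zz", "zz", "AM", "b0"]   # injected: lane 1 delayed by two symbols

aligned, skews = deskew([lane0, lane1])
print(skews)                        # [0, 2] -- the receiver reports the skew
assert aligned[0][0] == aligned[1][0] == "AM"   # markers now line up
```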
Lab managers who are preparing for 40/100GbE must assess their current test systems to verify that they are capable of providing the metrics required to test the new speeds.
Paul Mooney is product manager, 40/100 Gigabit Ethernet technologies, at Spirent Communications (www.spirent.com).