Data Center Evolution and the Need for Testing

Sept. 12, 2018

Data centers need to continually evolve to support increasing bandwidth demands and reduce operational and management costs. Modernizing the network architecture is required to achieve higher data rates, increase port counts, and lower cost per bit.

Of course, this growth can be a complicated process in an era of rapid technological change. Data center operators need to understand the technology and tools they can utilize to design, install, and maintain new networking products. Let’s take a closer look at some of the new technologies available to the data center and the network testing requirements that will play an integral part of successful evolution and operation.

Availability of New Ethernet Port Speeds

The IEEE took 35 years to develop and ratify six Ethernet standards (10 Mbps through 100 Gbps). Currently, an additional six Ethernet standards have either recently completed development or are in their final stages (see Figure 1). These new standardized port speeds range from 400/200G optical Ethernet for high-speed router and switch interconnects to 5G/2.5G rates for increased capacity reusing existing Cat 5e/6 copper cabling.

Figure 1. Although some time is likely to pass before some of these standards see broad deployment, data center designers need to prepare now to keep up with bandwidth demand and technology upgrades that will enable them to remain competitive.

Within the context of creating modules for these transmission rates, the physics of semiconductor materials limits the achievable clock rates. To build equipment capable of realizing the new high-speed communication rates and standards, network equipment manufacturers apply a variety of techniques, including multiple modulation formats.

For many years, the primary modulation format has been non-return-to-zero (NRZ) modulation. An example is a 100G Ethernet port implemented with the common QSFP28 pluggable optical module, which carries the traffic over 4x25G NRZ high-speed data lanes.

In an attempt to increase bit rates without resorting to the complexities of coherent modulation, the industry has moved toward four-level pulse-amplitude modulation (PAM4; see Figure 2). The approach delivers twice the bit rate of NRZ at the same symbol rate. On the downside, doubling the number of amplitude levels decreases the signal-to-noise ratio, making accurate detection and demodulation more difficult. This increases the importance of compensation techniques, making forward-error correction (FEC) mandatory for new Ethernet interfaces supporting PAM4.

While the new 400/200/100/50G Ethernet standards use 50G PAM4-capable high-speed data lanes to support the port rates, some variants still use 25G NRZ lanes, and others will move to 100G lanes. For example, a 400-Gbps Ethernet interface can be realized using eight 50-Gbps lanes with PAM4 modulation or four 100-Gbps lanes (see Figure 2).

Figure 2. Higher signaling rates and more parallel data lanes.
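The lane arithmetic above follows directly from symbol rate and modulation order: a lane's bit rate is its symbol (baud) rate times the bits carried per symbol, which is log2 of the number of amplitude levels. The sketch below illustrates this with nominal rates (ignoring FEC and line-coding overhead); the function names are illustrative, not from any standard.

```python
from math import log2

def lane_rate_gbps(baud_gbd, levels):
    """Bit rate of a single lane: symbol rate x bits per symbol (log2 of level count)."""
    return baud_gbd * log2(levels)

def port_rate_gbps(lanes, baud_gbd, levels):
    """Aggregate port rate across parallel electrical/optical lanes."""
    return lanes * lane_rate_gbps(baud_gbd, levels)

# NRZ has 2 levels (1 bit/symbol); PAM4 has 4 levels (2 bits/symbol).
print(port_rate_gbps(4, 25, 2))   # 100GbE as 4 x 25G NRZ -> 100.0
print(port_rate_gbps(8, 25, 4))   # 400GbE as 8 x 50G PAM4 -> 400.0
print(port_rate_gbps(4, 50, 4))   # 400GbE as 4 x 100G PAM4 -> 400.0
```

The same two knobs, symbol rate and levels per symbol, generate every lane configuration shown in Figure 2.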

Maximizing faceplate density is essential, particularly in the data center. An industry goal is to support thirty-six 400-Gbps ports in a 1U Ethernet switch. This has led to the development of a number of new form factors for optical transceiver modules (see Figure 3). Although the classic small form-factor pluggable (SFP) and the quad small form-factor pluggable (QSFP) modules remain the workhorses of the industry, emerging form factors include a quad small form factor pluggable – double density version (QSFP-DD) that delivers 400 Gbps. The QSFP-DD port has dimensions similar to those of a QSFP28 and is backward compatible with 100G QSFP28 and 40G QSFP+ form factors and data rates.

Figure 3. Emerging pluggable optical transceiver developments support new port rates and flexibility.

Flex Ethernet Standardization

Flex Ethernet (FlexE), standardized by the Optical Internetworking Forum (OIF), is a new link aggregation method designed to decouple the Ethernet MAC client interface rates (10G, 40G, and the new Nx25G client) from the physical interface or PHY rate, which connects routers and transport boxes (see Figure 4). This mechanism enables Ethernet connectivity between high-speed devices such as routers and optical transport equipment in a manner independent of the physical interface between the equipment (the MAC client rate may not match the physical port rate).

The benefits of FlexE are improved end-to-end management and network efficiency, with the flexibility of adjusting the service bandwidth as required. OIF released the first FlexE implementation agreement, IA OIF-FlexE-01.0, in 2016; the 2.0 agreement is expected by the end of 2018.

Figure 4. FlexE aggregation of various Ethernet MAC client rates.

What Do We Need to Test?

Amid this ever-evolving technology landscape, data centers have to design and build infrastructure and keep it running. Data center operators require specialized test and measurement equipment to qualify the design, installation, and monitoring of these new technologies as port rates and optical modules change.

Traffic Simulation and Measurement for the New Port Rates

Test equipment must be capable of supporting effective traffic simulation and measurement at the new port rates and standards. Remember, the IEEE already has six Ethernet standards in place. Of the six new standards, 400G Ethernet was ratified in December 2017; in fact, only IEEE 802.3cd (covering 50 Gigabit Ethernet, multimode 200 Gigabit Ethernet, and a new “cost-effective” version of 100 Gigabit Ethernet) remains to be completed. Effective testing of connections based on both the new and the “original six” Ethernet standards requires accurate, consistent traffic simulation at the data rate of interest and accurate, high-resolution measurement of the results. This is particularly important given that most networks use a mixture of data rates, depending on the specific bandwidth requirements of each interconnect (see Figure 5).

Figure 5. Data center operators require flexible traffic generators and analyzers to test multiple port rates and interfaces.

Interoperability and Standards Compliance

Before rolling out new switches and routers that take advantage of faster port rates and new technologies, the equipment must be tested for interoperability in the network and verified for standards compliance.

Network Traffic Verification

We can split Ethernet/IP traffic verification into three parts: validating the physical coding sublayer (PCS), validating the FEC layer used with PAM4, and evaluating data exchange in the Ethernet/IP layer.

The PCS is the topmost sublayer of the PHY. It repackages the data passing between the media-independent interface and the physical medium, distributing it across the high-speed lanes. Verifying the PCS includes checking lane skew/latency, lane misalignment, and lane swapping.
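Two of those PCS checks reduce to simple comparisons across lanes: skew is the spread between the fastest and slowest lane, and a lane swap shows up as the expected set of lane markers arriving in the wrong order. A minimal sketch of both checks (the function names and latency figures are illustrative, not from any standard):

```python
def lane_skew_ns(lane_latencies_ns):
    """Skew is the spread between the fastest and slowest lane arrival times."""
    return max(lane_latencies_ns) - min(lane_latencies_ns)

def lanes_swapped(received_order, expected_order):
    """Lane swap: the expected set of lane markers arrives, but in a different order."""
    return (sorted(received_order) == sorted(expected_order)
            and received_order != expected_order)

# Four lanes arriving with slightly different latencies (ns):
print(round(lane_skew_ns([103.2, 101.7, 104.9, 102.0]), 1))  # 3.2
print(lanes_swapped([0, 2, 1, 3], [0, 1, 2, 3]))             # True
```

Real PCS verification works on alignment markers inside the bit stream, but the pass/fail logic per lane has this shape.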

The process of validating the Ethernet/IP layer includes checking key performance indicators like throughput, frame loss, latency, and jitter, as well as frame-size (MTU) performance. The RFC 2544 and Y.1564 industry standards specify the parameters subject to test and the protocols for evaluating them.
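A useful reference point for the throughput and frame-loss tests is the theoretical maximum frame rate at each RFC 2544 frame size, which accounts for the 20 bytes of per-frame wire overhead (preamble, start-of-frame delimiter, and inter-frame gap). A sketch of that calculation:

```python
RFC2544_FRAME_SIZES = [64, 128, 256, 512, 1024, 1280, 1518]  # bytes, from RFC 2544

def max_frames_per_sec(link_gbps, frame_bytes):
    """Theoretical maximum frame rate for a given frame size.

    Each frame occupies an extra 20 bytes on the wire:
    7-byte preamble + 1-byte start-of-frame delimiter + 12-byte inter-frame gap.
    """
    wire_bits = (frame_bytes + 20) * 8
    return link_gbps * 1e9 / wire_bits

# 64-byte frames at 10 Gbps: the familiar ~14.88 Mpps line-rate figure.
print(round(max_frames_per_sec(10, 64)))  # 14880952
```

A measured frame rate below this ceiling at a given offered load indicates frame loss at that frame size.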

FEC BER Performance

The new Ethernet standards using PAM4 make FEC support mandatory. PAM4 implementations typically use KP-FEC, based on the Reed-Solomon RS(544,514) code over 10-bit symbols. This FEC corrects up to 15 symbol errors per codeword, which allows a burst error of up to 150 bits to be corrected. It is critical to characterize the FEC and signal-quality performance of switches, routers, optical transceivers, and interconnect cables. It is also beneficial for test equipment to inject errors manually to verify that the FEC layer performs the proper bit-error correction to maintain performance objectives.
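The correction figures come straight from Reed-Solomon coding theory: an (n, k) code corrects up to t = (n - k) / 2 symbol errors, and with 10-bit symbols those 15 correctable symbols cover a burst of up to 150 bits. A quick sketch of the arithmetic:

```python
def rs_correctable_symbols(n, k):
    """A Reed-Solomon (n, k) code corrects up to t = (n - k) // 2 symbol errors."""
    return (n - k) // 2

SYMBOL_BITS = 10  # RS(544, 514), as used by KP-FEC, works on 10-bit symbols

t = rs_correctable_symbols(544, 514)  # 15 correctable symbols per codeword
burst_bits = t * SYMBOL_BITS          # a 150-bit burst fits within 15 symbols
print(t, burst_bits)  # 15 150
```

Injecting 16 or more symbol errors into a single codeword should therefore produce uncorrectable-codeword counts on the analyzer, which is one way to confirm the FEC layer behaves as specified.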

Optical Modules and Interconnect Verification

Pluggable transceivers, especially first-generation products, may be a source of failures in the network as modules become smaller and more complex to accommodate higher bit rates. These modules need to be evaluated prior to deployment to ensure that they meet specifications. Key characteristics include proper thermal cooling, BER performance, and optical power level, both transmitted and received. The programming and read-write operation of the MDIO and I2C registers should be verified. It is also important to check input power tolerance and the overall power consumption of the module. Finally, the line clock thresholds of the high-speed lanes need to be evaluated.
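Optical power checks typically read a raw register over I2C and convert it to dBm. Transceiver management specifications such as SFF-8636 (used by QSFP28 modules) report received power as a 16-bit value with a least-significant bit of 0.1 µW; the conversion sketch below assumes that encoding, and the function name is illustrative:

```python
from math import log10

LSB_MW = 1e-4  # 0.1 uW per count, per common transceiver MSAs (e.g., SFF-8636)

def raw_power_to_dbm(raw_counts):
    """Convert a raw 16-bit power register reading to dBm (dBm = 10 * log10(mW))."""
    mw = raw_counts * LSB_MW
    return float("-inf") if mw <= 0 else 10 * log10(mw)

print(raw_power_to_dbm(10000))           # 10000 counts = 1.0 mW = 0.0 dBm
print(round(raw_power_to_dbm(5000), 2))  # 0.5 mW, about -3.01 dBm
```

Comparing the converted values against the module's specified transmit and receive power ranges is a basic pass/fail gate before deployment.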

All cabling interconnecting the new port rates, whether optical fiber or direct attached copper (DAC) cabling, needs to be tested to guarantee proper operation.

Flex Ethernet Layer Verification

New FlexE deployments will require comprehensive testing to ensure proper equipment performance and service delivery. Test equipment must be capable of simulating and monitoring the various FlexE client types, including the new variable Nx25G option, over various FlexE PHY port rates such as 100 Gigabit Ethernet (GbE).

Testing must include verification of the new FlexE layers such as the TDM shim layer, which aggregates and distributes the Ethernet clients over multiple PHYs. A 100GbE PHY is capable of supporting up to 20 independent 5G channels of data.
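The shim's calendar math is simple: each 100GbE PHY contributes 20 slots of 5 Gbps, and every client consumes a whole number of slots. A minimal allocation sketch for a single PHY (the allocation strategy and names are illustrative; a real shim also fixes which calendar positions each client occupies):

```python
SLOT_GBPS = 5            # FlexE calendar slot granularity
SLOTS_PER_100G_PHY = 20  # 20 x 5G slots on a 100GbE PHY

def allocate_slots(clients_gbps, total_slots=SLOTS_PER_100G_PHY):
    """Map each client's rate to a count of 5G calendar slots on one PHY."""
    allocation, used = {}, 0
    for name, rate in clients_gbps.items():
        need = -(-rate // SLOT_GBPS)  # ceiling division: a 10G client needs 2 slots
        if used + need > total_slots:
            raise ValueError(f"no calendar slots left for client {name!r}")
        allocation[name] = need
        used += need
    return allocation

# One 40G, one 25G, and two 10G clients fit on a single 100GbE PHY (17 of 20 slots).
print(allocate_slots({"a": 40, "b": 25, "c": 10, "d": 10}))
```

Test equipment verifying the shim needs to confirm that each client's traffic actually arrives in its assigned slots and that reconfiguring the calendar does not disturb the other clients.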

The management overhead layer will also need to be verified to ensure proper identification of, and response to, network alarm and failure events. The overhead layer also defines management communication channels, which may be used for end-to-end communication between FlexE equipment and must be verified before being placed in operation.

The new FlexE layers must be properly configured and proven to ensure the proposed management and bandwidth efficiency gains are obtained.

Additional Test Equipment Requirements

The new network paradigm, coupled with rapidly changing hardware and protocols, puts special demands on data center operators and the test equipment they use. Engineering and operating these systems are difficult enough; test equipment should meet the technical challenges and simplify the process.

For starters, equipment must be capable of testing components and systems at the new port rates. Optical networking is not a “one-speed-fits-all” proposition. The optimal data rate differs depending on the function, budget, and even age of the network. As it’s financially impractical to buy separate test gear for each speed, equipment must be multifunctional so that it can be used throughout the network as required.

Similarly, the equipment needs to be able to accommodate multiple pluggable form factors. The ideal platform is built around pluggable modules that enable new features and ports to be added when required. This “pay-as-you-go” approach enables test equipment to adapt to the evolving technology.

Traffic generators and analyzers should be designed to provide flexible, high-density traffic generation. It is also useful to have equipment capable of multi-port traffic generation and analysis for high-density and aggregation applications.

Finally, ease of use should not be underestimated. Easy configuration speeds setup, and automated testing increases the repeatability and reliability of results. Installing and maintaining the network may be challenging, but testing it does not have to be.

Conclusion

From software to hardware, port rates to optical modules, the data communications industry is in a radical state of flux. New business models are forcing data centers to become more efficient and expandable. The new hardware and software trends demand the availability of flexible instrumentation to support new product development and network operation. Existing test equipment toolboxes need to be refreshed to be able to support the new technologies that are being deployed. Although some new high-speed network technology may not see broad adoption immediately, data centers need to prepare for the future. The right test equipment will help them do just that.

Keith Cole is vice president of product marketing at VeEX (Fremont, CA).