Growth Drivers for Data Center Networks

Aug. 25, 2020
Incorporating the latest technologies to accommodate the rapidly changing needs of data centers is essential to making the most advanced communication infrastructures and hyperscale data centers possible.
The top 10 cloud service providers spent over $60 billion on data centers in 2019, and this is only expected to grow, according to Dell’Oro Group. One critical driver for continued growth will be a rise in investment for hyperscale data centers (HDCs) to accommodate data processing needs across the globe.

Modern society is accustomed to constant connectivity and communication through cloud computing, social networking, multimedia streaming, mobile usage, telehealth, and everything in between. Once considered “nice to have” or an optional amenity for many people, these services have now become an expectation, making the development and production of optimized optical interconnects that ensure the seamless operation of data centers a central focus for today’s market.

With massive amounts of data traveling through optical communications networks each day, the need for high-performance networking (HPN) continues to increase. Incorporating the latest technologies to meet the rapidly changing needs of data centers is what makes the most advanced communication infrastructures and HDCs possible.

Hardware and interconnect evolution

Imagine one data center facility consisting of hundreds of thousands of interconnected servers, each one processing and transporting massive amounts of data. Continued innovation in switching, routing, server, and interconnect hardware enables data transfer at higher rates than ever before, resulting in enhanced computing efficiency that makes new and exciting cloud-based applications and services possible.

Modules that enable optical connectivity between all of this equipment must keep pace with the massive data needs of the switches and servers within the data center. The networking architecture of a data center is based on how it most efficiently delivers the intended services. For HDCs, there are also practical constraints, such as access to low-cost electrical power and to long-haul optical networks. Smaller data centers can be specialized for specific applications such as gaming, artificial intelligence, or edge computing tied to 5G wireless application requirements.

The key hardware enablers in deploying a new data center are usually the servers and the switches. The servers do the heavy lifting of the data center while the switches enable the multilevel server and storage interconnects that make data centers so effective.

Another key hardware element in the data center is the interconnection links between servers and between servers and switches. The typical data center includes racks of servers in a row interconnected by switches at the top of each rack; the rows are interconnected by additional switches at the end of each row. Interconnects provide the networking links required for the data center to function. While the architecture of the data center depends critically on the servers and switches, it also depends on the cost, reach, and throughput of these interconnects.
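
As a rough illustration of the scale involved, the sketch below tallies the links in a simple rack-and-row layout with top-of-rack (ToR) and end-of-row (EoR) switches. All of the counts are assumed values chosen for the example, not figures from this article.

```python
# Illustrative only: tally the links in a simple rack-and-row layout with
# top-of-rack (ToR) and end-of-row (EoR) switches. All sizes are assumed.

SERVERS_PER_RACK = 40   # assumed
RACKS_PER_ROW = 20      # assumed
ROWS = 50               # assumed
UPLINKS_PER_TOR = 8     # assumed ToR uplinks toward the EoR switches
UPLINKS_PER_EOR = 16    # assumed EoR uplinks toward the core

server_to_tor = SERVERS_PER_RACK * RACKS_PER_ROW * ROWS
tor_to_eor = UPLINKS_PER_TOR * RACKS_PER_ROW * ROWS
eor_to_core = UPLINKS_PER_EOR * ROWS

print(f"server-to-ToR links: {server_to_tor:,}")
print(f"ToR-to-EoR links:    {tor_to_eor:,}")
print(f"EoR-to-core links:   {eor_to_core:,}")
print(f"total interconnects: {server_to_tor + tor_to_eor + eor_to_core:,}")
```

Even with these modest assumed sizes, the link count runs into the tens of thousands, which is why interconnect cost, reach, and power weigh so heavily on the overall architecture.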

As the compute power and data rates of the servers and switches have increased, the technologies used within the interconnects have evolved. Due to electrical signal losses at higher speeds, passive electrical cables, which in previous generations enabled tens of meters of interconnect, are now relegated to a few meters. Active electrical cables, which add electrical amplification, were considered; however, their significant cost and power were not worth the minimal increase in reach. Low-cost optical fiber-based interconnects that can reach up to 10 km are now widely used within data centers, but they must also meet data center cost, power, and latency needs.

Optical interconnects enable long-reach, high-speed communications. As higher-speed interconnect requirements have come to other markets, the opportunity for optical interconnects has flourished. Low-cost optical interconnects based on amplitude modulation (non-return to zero, or NRZ) are standard in access, wireless, and data center applications up to 25 Gbps per channel. In these optical links, performance depends critically on a clock and data recovery (CDR) function that recovers the clock and retimes the signal to ensure proper link performance. These CDRs are low-power, low-cost analog devices that have been key to the advancement of high-volume optical interconnects.

A few years ago, the roadmap of requirements for data center servers and switches included the need for 50-Gbps-per-channel and 100-Gbps-per-channel interconnects. At that time, the bandwidth or speed performance of the optical components made doubling or quadrupling the link rate with the tried-and-true NRZ coding unlikely. The solution was to go to four-level pulse amplitude modulation (PAM4), whereby within the same timeslot that NRZ would send a 0 or 1 level, PAM4 would carry a 0, 1, 2, or 3 level. Because each symbol now carries two bits instead of one, this effectively doubled the information rate within the same timeslot.
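
A minimal sketch of that idea follows, assuming the Gray-coded bit-pair mapping commonly used for PAM4; the article itself does not specify a particular mapping.

```python
# Minimal sketch: map a bit stream onto PAM4 symbols two bits at a time.
# The Gray-coded mapping (00->0, 01->1, 11->2, 10->3) is the one commonly
# used for PAM4; it is assumed here for illustration.

GRAY_MAP = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def nrz_symbols(bits):
    """NRZ: one bit per symbol (levels 0 or 1)."""
    return list(bits)

def pam4_symbols(bits):
    """PAM4: two bits per symbol (levels 0-3), halving the symbol count."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_MAP[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

bits = [1, 0, 1, 1, 0, 0, 0, 1]
print("NRZ symbols :", nrz_symbols(bits))   # 8 symbols for 8 bits
print("PAM4 symbols:", pam4_symbols(bits))  # 4 symbols for the same 8 bits
```

The same number of timeslots carries twice the data, which is exactly the appeal of PAM4 when the optics cannot simply be clocked twice as fast.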

The tradeoff was in the signal-to-noise ratio of each level, since the same amplitude range now had to accommodate four levels instead of two. This was a challenge for the optical link budgets, as the poor transmit signal-to-noise ratios had to be compensated for by the receiver to close the link. Solving this problem led to the development of digital signal processing (DSP) ICs. DSPs use high-speed analog-to-digital and digital-to-analog converters to enable additional digital processing of the signal that compensates for the poor signal-to-noise ratio in the link. However, the DSP technology added power and latency over previous generations and required advanced silicon processes to deliver the needed high-speed performance, which added cost to the overall solution.
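
As a hedged illustration of the kind of processing involved, the sketch below applies one common DSP building block, a feed-forward equalizer (an FIR filter run over the digitized samples), to a hypothetical dispersive channel. The channel model and tap values are invented for the example and do not represent any particular DSP IC.

```python
import numpy as np

# Minimal sketch of one DSP building block: a feed-forward equalizer (FFE),
# i.e., an FIR filter applied to ADC samples to undo inter-symbol interference.
# The channel model and tap weights are hypothetical, for illustration only.

rng = np.random.default_rng(0)
symbols = rng.integers(0, 4, size=64).astype(float)        # ideal PAM4 levels 0..3

# Hypothetical dispersive channel: each symbol leaks into the following sample.
channel = np.array([1.0, 0.45])
received = np.convolve(symbols, channel)[: len(symbols)]
received += rng.normal(0.0, 0.05, size=received.shape)     # additive noise

# Hypothetical 4-tap FFE: a truncated inverse of the channel above.
taps = np.array([1.0, -0.45, 0.2025, -0.0911])
equalized = np.convolve(received, taps)[: len(received)]

print("mean error before EQ:", np.mean(np.abs(received - symbols)).round(3))
print("mean error after  EQ:", np.mean(np.abs(equalized - symbols)).round(3))
```

The digital filtering recovers much cleaner symbol levels, but every tap costs multiply-accumulate operations per sample, which is where the extra power, latency, and silicon cost of the DSP approach come from.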

Back to the future

To meet the ultimate needs of the data center, the PAM4 optical modules used in interconnects would benefit from the proven approach used at 25 Gbps: low-cost optics and low-power, low-latency analog CDR technology.

A group of companies, recognizing the potential benefits of such an approach, formed a new multi-source agreement (MSA) called the Open Eye MSA. The MSA's goal is to develop a full ecosystem of optics, ICs, modules, and test equipment that can be used by data centers. The Open Eye MSA intends to recognize and benefit from the improvement in optical components and to establish transmit and receive specifications that ensure link performance. As in previous NRZ systems, the transmit eye can easily be tested using an eye mask. This robust manufacturing test methodology is low cost and ensures the interoperability of different optical modules. The Open Eye specifications are based solely on these eye-mask requirements and enable interoperability of all modules using either DSP or analog CDR technologies.
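
To make the eye-mask idea concrete, here is a minimal sketch of a generic mask check: count the sampled (time, amplitude) points that land inside a central keep-out region of the eye. The rectangular region and its coordinates are hypothetical; actual masks, including the Open Eye MSA's, are defined in the relevant specifications.

```python
# Minimal sketch of a generic eye-mask check: count waveform samples that fall
# inside a central keep-out region of the eye. The rectangular mask and its
# coordinates are hypothetical placeholders, not values from any specification.

def mask_violations(samples, t_lo=0.4, t_hi=0.6, v_lo=0.45, v_hi=0.55):
    """samples: iterable of (time, amplitude) pairs, time normalized to one UI."""
    return sum(1 for t, v in samples if t_lo <= t <= t_hi and v_lo <= v <= v_hi)

# Hypothetical sampled points folded into a single unit interval (UI).
eye_samples = [(0.10, 0.95), (0.50, 0.05), (0.52, 0.50), (0.90, 0.98), (0.45, 0.48)]

hits = mask_violations(eye_samples)
print(f"{hits} mask violation(s) out of {len(eye_samples)} samples")
# In this simplified check a transmitter passes only if no samples enter the
# keep-out region (real test procedures may allow a small hit ratio).
```

The appeal of this style of test is that it judges the transmitted waveform directly, regardless of whether a DSP or an analog CDR produced it.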

Data center traffic comes from the mass consumption of data and technology in everyday life, with the majority of traffic occurring within individual data centers as complex searches and compute activities are carried out. The compute power of a data center greatly exceeds any capability of a personal device. This enables many new capabilities, including artificial intelligence (AI), to offer highly desirable services to consumers such as image or facial recognition and robust decision making based on large data sets. Because AI relies on parallel compute algorithms, it requires even more interconnects within a single data center, and the latency, or time delay, of those interconnects affects the overall performance.

Implementation of analog CDR PAM4 optical interconnects will help alleviate some of the common problems data centers face. The reduced latency of analog PAM4 solutions is a key benefit for AI and for high-performance compute data centers where reducing CPU wait times greatly enhances the overall compute efficiency. The lower power of analog PAM4 CDR designs provides additional flexibility in determining how best to utilize the data center's overall power budget. Reduced power for interconnects may enable network switches to use more straightforward, cost-effective cooling techniques. In turn, data centers will be able to deliver more compute capability with increasing longevity and reliability.

Paths forward

Going forward, the continued advancement of optical components will enable analog PAM4 CDR technology to deliver 100 Gbps per channel, providing low-power, low-latency technology for 400-Gbps, 800-Gbps, and greater optical interconnects. As PAM4 signaling enters other markets such as 5G wireless, the low latency of analog CDR technology will be a key enabler of real-time applications such as autonomous driving. Thus, the long-term roadmap of analog CDR technology aligns with the fundamental needs of the data center and other markets for low power, low cost, and low latency.
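
As a back-of-the-envelope sketch of how those per-channel rates aggregate into module rates, assuming the 4- and 8-lane configurations common in pluggable form factors:

```python
# Back-of-the-envelope sketch: aggregate per-lane rates into module rates.
# Lane counts of 4 and 8 per module are assumed as typical configurations.

for lane_rate_gbps in (25, 50, 100):
    for lanes in (4, 8):
        print(f"{lanes} x {lane_rate_gbps} Gbps/lane = {lanes * lane_rate_gbps} Gbps module")
```

The 4 x 100 and 8 x 100 rows are what make 400-Gbps and 800-Gbps interconnects possible with 100-Gbps-per-channel technology.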

Data centers continuously advance their architecture and optical communication needs but will always call for durable, high-performance approaches that optimize operations. DSP-based designs carry higher power, latency, and cost than analog CDR approaches, limiting their potential for adoption in the next generation of optical modules and in the future co-packaged optical interconnects that will be deployed for large data center applications.

While PAM4 data center interconnects are relatively new and now commonly use a DSP-based approach, the simpler, more cost-effective analog CDR designs will see increased adoption and implementation as the demand and bandwidth for data processing increase over time. The low-power, low-latency benefits of the analog CDR approach, which meet the needs of high-performance computing, AI, and cloud data center networks, will play an important part as data centers of the future develop and grow at scale. It is difficult to say which solutions will dominate the market, but history has shown that low power and low cost are critical. It will be interesting to see how the potential of analog CDR solutions evolves as the demand for optical communication components grows in the rapidly changing data center industry.

Timothy Vang is vice president of Semtech's Signal Integrity Products Group.
