Is 2020 the year for data center network 400G?

Feb. 13, 2020

Hyperscale data center operators such as Alibaba, Amazon, Facebook, Google, and Microsoft have embraced 100 Gigabit Ethernet (100GbE) for their data center networks. However, several of these companies have indicated that they require even higher transmission rates – both for intra- and inter-data center communications – in the near term. While some will adopt 200-Gbps technology, others plan to go straight to 400GbE. And with the necessary optical transceivers finally arriving on the market, this year just might be the dawn of the 400G era – although just barely and only if module availability intersects correctly with network operator plans, say analysts.

400GbE for inside the data center

Data center network operators who don’t want to design their own modules (and there already are some who do) will have two types of 400-Gbps modules to evaluate for their requirements: PAM4-based transceivers that conform to IEEE 802.3bs specifications and coherent-enabled 400ZR devices built to the relevant OIF Implementation Agreement. The former likely will see its greatest use inside the data center, with the latter finding a home in data center interconnect (DCI) applications – at least to start.

IEEE 802.3bs offers a wide range of specifications for different Ethernet network requirements (the lane arithmetic behind each variant is sketched just after this list):

  • 400GBASE-SR16, which covers at least 100 m over multimode fiber via 16 transmit and another 16 receive fibers, each transmitting at 25 Gbps (a study group has formed to investigate whether OM5 fiber and shortwave WDM technology could reduce the number of fibers required)
  • 400GBASE-DR4, for at least 500 m over single-mode fiber using four parallel fibers in each direction with 100-Gbps transmission on each fiber; the decision to target 100-Gbps transmission was the subject of spirited debate
  • 400GBASE-FR8, which uses eight-wavelength WDM to support reaches of at least 2 km over a single-mode fiber in each direction
  • 400GBASE-LR8, which is similar to -FR8 except that the reach is extended to at least 10 km over single-mode fiber.
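For readers keeping score of the lane arithmetic, the short sketch below shows how each variant adds up to 400 Gbps. It is illustrative only and uses nominal per-lane data rates; actual line rates are higher once FEC and framing overhead are added.

```python
# Illustrative lane arithmetic for the IEEE 802.3bs 400GbE variants above.
# Nominal per-lane data rates; FEC-encoded line rates are higher.
pmds = {
    "400GBASE-SR16": {"lanes": 16, "gbps_per_lane": 25,  "medium": "parallel multimode fiber"},
    "400GBASE-DR4":  {"lanes": 4,  "gbps_per_lane": 100, "medium": "parallel single-mode fiber"},
    "400GBASE-FR8":  {"lanes": 8,  "gbps_per_lane": 50,  "medium": "8-wavelength WDM over duplex SMF"},
    "400GBASE-LR8":  {"lanes": 8,  "gbps_per_lane": 50,  "medium": "8-wavelength WDM over duplex SMF"},
}

for name, p in pmds.items():
    total = p["lanes"] * p["gbps_per_lane"]
    print(f'{name}: {p["lanes"]} x {p["gbps_per_lane"]} Gbps ({p["medium"]}) = {total} Gbps')
```

The same arithmetic explains the industry’s push toward 100-Gbps lanes: fewer lanes per port means fewer parallel fibers or wavelengths and, as discussed below, a larger usable switch radix.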

Proprietary variants of these specifications have already been announced. For example, Finisar (now part of II-VI) demonstrated an extended reach version of the LR8 (dubbed eLR8) at ECOC 2018.

Regardless of which specification a 400GbE transceiver follows, it likely will come in one (or more) of four form factors: CFP8, OSFP, QSFP-DD, and modules described in the Consortium for On Board Optics’ (COBO’s) initial specification set. With the CFP8 finding a place in service provider networks and COBO applications in their infancy, OSFP and QSFP-DD will prove the most ubiquitous form factors. Common wisdom has the former as the favorite of Google (which is a member of the multisource agreement) and the latter seeing the majority of deployments elsewhere. Jim Theodoras, vice president of research and development at optical module vendor HG Genuine USA, told attendees of a recent Lightwave webcast on data center strategies that 400GbE modules should find use in connecting top-of-rack switches to intermediate switches and those intermediate switches to routers.

And at least some of those deployments have begun, although not necessarily smoothly. According to LightCounting, Amazon began deployments of DR4 modules (good for 4x100GbE breakout applications) last year but slowed the initiative due to performance problems with the modules’ PAM4 DSP. Meanwhile, Google rolled out custom 2x200GbE devices last year as well, the market research firm said.

However, Dale Murray, a LightCounting analyst, indicated in a recent email exchange that he’s not expecting a huge ramp for standard 400GbE modules this year. “None of the largest data center operators are broadly deploying 400GbE as point-to-point 400GbE MACs communicating via optical modules,” he wrote. “The limitations of the 8x50-Gbps module electrical interface translate into a small switch radix; just 32 for a Tomahawk 3. The Tomahawk 4 is announced but will take some time to see deployment in 64-port switches. It will take a 100-Gbps SerDes ecosystem to cause a broad ramp of 400GbE, and that’s not likely before 2022.”
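Murray’s radix point is straightforward arithmetic: the number of 400GbE ports an ASIC can expose is its pool of SerDes lanes divided by the lanes each port consumes. The sketch below uses commonly cited SerDes configurations for the switch generations he mentions; those lane counts are background assumptions for illustration, not figures from this article.

```python
def ports_at_400g(serdes_lanes: int, gbps_per_serdes: int) -> int:
    """400GbE ports a switch ASIC can expose from its SerDes pool."""
    lanes_per_port = 400 // gbps_per_serdes   # 8 lanes at 50 Gbps, 4 lanes at 100 Gbps
    return serdes_lanes // lanes_per_port

# Commonly cited configurations (assumptions for illustration):
print(ports_at_400g(256, 50))    # Tomahawk 3 class (12.8 Tbps): 32 x 400GbE
print(ports_at_400g(512, 50))    # Tomahawk 4 class (25.6 Tbps): 64 x 400GbE
print(ports_at_400g(256, 100))   # 100-Gbps SerDes era: 64 x 400GbE from half the lanes
```

A 32-port 400GbE switch does not fan out far enough for large leaf-spine fabrics, which is why Murray ties a broad 400GbE ramp to the arrival of a 100-Gbps SerDes ecosystem.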

That said, the deployments at Amazon and Google should continue this year, Murray believes. For example, LightCounting expects what Murray termed “healthy volumes” of DR4 shipments beginning in the second half of 2020. Meanwhile, he expects Google’s use of 2x200GbE devices to triple this year.

Andrew Schmitt, founder and directing analyst at Cignal AI, is in general agreement with Murray. “I think you’re going to see more people moving into production with DR4 and FR4 400GbE modules, and they’ll be single-wavelength 100G stuff. The single-wavelength 100G is probably more interesting, in terms of the immediate applications,” he told Lightwave. “The problem is that the hyperscale guys have all deviated to 200G or 2x200G types of solutions that allowed them to leverage cheap 100G optics and PAM4 chips and get better performance per dollar than going to the 400G solutions that were being proposed. So, all in all, there’s not a lot of interest from any of the hyperscale guys in standard 400GbE optics in 2020.”

That said, there may be some hope for standard 400GbE data center modules in 2020. “The only exception may be Amazon. I think they may be waiting for Intel to go to production with the DR4, and that’s a possibility,” Schmitt allowed. “But Microsoft is actually gating their usage of 400GbE clients by the availability of 400ZR. So the data center interconnect is driving the decision on the client-side optics.”

400ZR for DCI

Microsoft may not have to wait long to make its client-side 400G decisions. Demonstrations of 400ZR devices are expected to generate significant buzz at OFC 2020. Such modules incorporate coherent transmission technology into data center-friendly form factors at a price point that’s expected to prove tolerable for hyperscale operators (and others) for DCI. The resulting demand should translate quickly into sales; Schmitt told attendees at an OIF meeting last year that shipments of 400ZR modules should reach 20,000 in less than a year, although applications other than data center networks would help drive such sales.
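As background on how a single coherent wavelength carries a full 400GbE client, the back-of-the-envelope sketch below walks through the 400ZR modulation math. The symbol rate is rounded and drawn from the OIF work generally, not from figures cited in this article.

```python
# Rough 400ZR line-rate check (approximate figures, for illustration only).
symbol_rate_gbaud = 60   # ~60 GBd coherent symbol rate
bits_per_symbol = 4      # 16QAM carries 4 bits per symbol
polarizations = 2        # dual-polarization transmission doubles throughput

line_rate_gbps = symbol_rate_gbaud * bits_per_symbol * polarizations
print(f"Approximate raw line rate: {line_rate_gbps} Gbps")  # ~480 Gbps

# The headroom above 400 Gbps carries framing and concatenated FEC overhead,
# which is what lets one wavelength close an amplified DCI span.
```

Compare that with the PAM4-based modules described earlier, which need four or eight parallel lanes to reach the same aggregate rate.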

Of course, any deployment momentum awaits the availability of modules. Two companies have announced that they’re nearing production of 400ZR-capable transceivers. NeoPhotonics said in January that it has begun sampling its 400ZR ClearLight module in the OSFP form factor. (It also announced 400G capabilities in a CFP2-DCO configuration, which likely would find use in service provider networks.) Inphi, meanwhile, stated in December 2019 that it had begun delivering engineering samples of its COLORZ II 400ZR module in the QSFP-DD form factor.

The timing of 400ZR module availability naturally will drive deployment timelines. Murray and Schmitt agree that if there is to be a quick ramp of 400ZR deployments within the hyperscale community, Microsoft (the prime customer for Inphi’s original COLORZ transceiver to enable IP over DWDM architectures) will drive it. “Certainly, the 400ZR should be a no-brainer replacement for COLORZ, if it’s doing what it’s supposed to be doing,” Schmitt said. “So I’ve got to believe that Microsoft would want to make the switch as soon as possible. But they haven’t specifically said what they’re going to do.”

Thus, whether speaking about inter- or intra-data center applications, it appears that the plans of one or two hyperscale data center operators will determine whether this year will see 400G deployments begin to ramp. And those plans don’t yet appear confirmed.
