OIF takes a holistic look at network interoperability

Oct. 10, 2024
The organization’s ECOC multi-vendor interop event showcased innovations aimed at maximizing speed and conserving power.

The Optical Internetworking Forum (OIF) is central to the industry’s optical interoperability efforts. It leads several projects to drive interoperability across various optical networking and Ethernet service configurations.

During the ECOC 2024 trade show, OIF led an interoperability demonstration featuring live collaboration between 34 member companies.

Nathan Tracy, president of OIF and technologist, system architecture team, and industry standards manager at TE Connectivity, said the organization is taking a broad approach that considers multiple perspectives.

“While the OIF oversees multiple component pieces or separate technology tracks, the key thing is to look at it holistically and how we are collectively addressing the hottest issues in the industry right now,” he said. “We can dive into each one as an individual track, but the important part of the message is these guys are solving the DCI problem, next-gen data rates, power, management and efficiency issues, and power consumption issues related to an AI architecture.”

He added, "It’s vital to take a step back, look at the holistic picture, and see the impact that OIF has in changing the world.”

Data center consistency

The role of the data center continues to evolve, driven by a slew of new AI and cloud-based applications.

Cloud providers must figure out how to connect multiple data centers to operate as one. OIF has been developing the 400ZR and 800ZR optical specifications to address these issues.

400ZR is an interoperable 400 Gbps interface based on single-carrier coherent DP-16QAM modulation and a low-power DSP supporting absolute (non-differential) phase encoding/decoding, with a concatenated FEC (C-FEC) that has a post-FEC error floor below 1.0E-15.

Similarly, 800ZR is an interoperable 800 Gbps interface based on single-carrier coherent DP-16QAM modulation and a low-power DSP supporting non-differential phase encoding/decoding, paired with OFEC and a similarly low post-FEC error floor.
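To put that error floor in perspective, here is a back-of-the-envelope calculation. It is not part of either OIF specification, and it assumes the same 1.0E-15 target applies to 800ZR; because the floor is an upper bound, real links should see residual errors even less often than this worst case.

```python
# Back-of-the-envelope: what a post-FEC error floor below 1.0E-15 implies.
# Illustrative only; the line rates and the 1e-15 target come from the article,
# and applying the same target to 800ZR is an assumption.

def mean_seconds_between_errors(bit_rate_bps: float, post_fec_ber: float) -> float:
    """Expected time between residual bit errors at a given post-FEC BER."""
    errors_per_second = bit_rate_bps * post_fec_ber
    return 1.0 / errors_per_second

for label, rate in (("400ZR", 400e9), ("800ZR", 800e9)):
    t = mean_seconds_between_errors(rate, 1.0e-15)
    print(f"{label}: about one residual bit error every {t / 60:.0f} minutes at the 1e-15 floor")
# 400ZR: about one residual bit error every 42 minutes at the 1e-15 floor
# 800ZR: about one residual bit error every 21 minutes at the 1e-15 floor
```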

“The ZR work we are doing relates to connecting data centers with a high enough bandwidth link and speed so you can benefit from a force multiplier,” Tracy said. “You are not limited to the scale of an individual data center, but the collective scale because of the DCI links that bring those together.”

He added that OIF focuses on “the interoperable nature of the ZR specification.”

Coherent challenges, opportunities

OIF’s ZR work comes amid both challenges and opportunities for coherent optics.

Coherent optics are typically used for ultra-high-bandwidth applications ranging from 100 Gbps to 1 Tbps.

While coherent optics offer various benefits, one key challenge is that every vendor has baked its own flavor into its platforms, making interoperability difficult.

“Coherent has been this long-haul workhorse for the industry, but it comes at a price,” Tracy said. “You don’t have interoperability because each provider of coherent optics has a secret sauce that makes it work, and the secret sauce keeps it from interoperating.”  

OIF’s interoperability work has helped create coherent optics that are faceplate-pluggable in a common form factor.

“We can pull out supplier number one and plug in supplier number two, and it works,” Tracy said. “On the other end of the link, we can have supplier number three talking to supplier number two, and it all works.”

But the organization is looking forward to the next generation. One of its latest projects is 1600ZR, which started earlier this year.

The proposed 1600ZR+ standard is focused on supporting 1.6 Tbps of data across hundreds of kilometers of optical fiber. OIF’s 1600ZR initiative follows previous work standardizing the 400G 400ZR and the 800G 800ZR coherent pluggable optics.      

OIF’s 1600ZR project aims to develop a power-optimized, multi-vendor interoperable 1600 Gbps coherent optical interface targeting Data Center Interconnect (DCI) scenarios. To complement the 1600ZR project, the organization introduced an interoperable 1600ZR+ project.

The decision to address a ZR+ specification is a first for the OIF. Until now, only the OpenZR+ Multi-Source Agreement (MSA) and the OpenROADM MSA have developed interoperable ZR+ optics.

“Now, we have a project to work on 1600ZR,” Tracy said. “While 1600ZR was not part of our ECOC demo, we continue to build on this model of supporting this industry need.”

Tracy emphasized that developing new interoperable standards involves a broad range of players in routing, switching, optical, and testing.

“It takes a village to solve these problems,” he said. “You must have the optical and transport guys and test and measurement guys. We have pulled together this broad ecosystem that are all members of the OIF.”

AI/ML’s influence

Unlike the previous generation of data centers, which mainly housed equipment, today’s data center requires far more power, driven by the bandwidth demands of new data workloads.

A recent Dell’Oro Group report revealed that Data Center Physical Infrastructure (DCPI) revenue growth accelerated for the first time in five quarters in the second quarter of 2024, as physical infrastructure deployments to support accelerated computing workloads began to materialize more than expected.

Earlier, in the conventional data center application space, providers would have two top-of-rack switches connecting 40 servers in the rack.

At that time, the top-of-rack switch handled simple requests, such as telling a person what time a movie starts or how much to pay for a new pair of shoes.

Tracy said now the configuration is different. “Now, we gutted the rack, and we don’t have all those servers there,” he said. “We have a bunch of servers, but they are connecting to a bunch of GPUs, and all of those accelerators are connected to other accelerators, so we have a fabric of fabrics.”

The task comes in on the front-end switch, which passes it to the CPU. The CPU determines it will need a large number of GPUs to solve the problem and assigns the task to them; the results are then transferred back to a server.

“We’re trying to get GPUs to act like one giant compute unit,” Tracy said. “This means we need a low-latency link between all the GPUs, but it needs to be very high bandwidth, which gets us to the CEI 112G and CEI 224G data rate projects.”
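A rough, hypothetical calculation shows why those per-lane CEI rates matter for a GPU fabric: doubling the lane rate doubles the bandwidth of every port without adding lanes, retimers, or cables. The lane count, GPU count, and port count below are illustrative assumptions, not OIF figures.

```python
# Rough illustration of why per-lane electrical rates (OIF CEI) matter for
# GPU fabrics. The port width, GPU count, and ports per GPU are hypothetical.

LANES_PER_PORT = 8    # assumption: a common pluggable/port width
GPUS = 1024           # assumption: size of an example training cluster
PORTS_PER_GPU = 1     # assumption: one fabric port per GPU

for lane_rate_gbps in (112, 224):
    port_bw = LANES_PER_PORT * lane_rate_gbps              # Gbps per port
    fabric_bw = port_bw * PORTS_PER_GPU * GPUS / 1e3        # Tbps into the fabric
    print(f"{lane_rate_gbps}G lanes -> {port_bw} Gbps per port, "
          f"~{fabric_bw:.0f} Tbps aggregate for {GPUS} GPUs")
```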

Addressing powering issues

Of course, as demands in the data center rise, so do concerns over how to reduce power consumption.

With GPUs consuming so much energy, the question is how densely you can pack them in a rack.

“There’s a limit to the density because we can’t fit all in one rack, and GPUs are getting further apart,” Tracy said. “Maybe these links need to be optically connected, so we increased the power consumption by taking all these links and converting them from electrical to optical links.”

To address this, the OIF held an energy efficiency demonstration at ECOC. The demonstration illustrated how to achieve energy efficiency over both copper and optical links.

On the copper side, which is ideal for local connections, the OIF demonstration showed that as data rates increase, pure electrical link reach becomes shorter. OIF said that reach can be extended with additional DSP capability and with the addition of retimers, trading off additional latency and power.

With optical links, spending additional power on electrical-to-optical conversion lets the signal travel farther without additional retimers to restore it. Further power and latency can be saved if the electro-optical conversion is co-packaged with the ASIC.
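The tradeoff Tracy describes can be sketched with a toy model: multiply a per-link power and latency penalty by the number of links in the fabric and the totals diverge quickly. All per-link figures below are hypothetical placeholders chosen only to illustrate the shape of the comparison, not OIF or vendor measurements.

```python
# Toy model of the copper-vs-optical tradeoff described above. Every per-link
# power and latency figure here is a hypothetical placeholder, not real data.

LINK_OPTIONS = {
    # (extra watts per link, extra nanoseconds per link) -- all assumptions
    "passive copper (short reach)":       (0.0,  0),
    "copper + retimer (extended reach)":  (4.0, 10),
    "pluggable optics (long reach)":      (12.0, 50),
    "co-packaged optics (long reach)":    (7.0, 25),
}

LINKS_IN_FABRIC = 10_000  # assumption: links in a large AI "fabric of fabrics"

for option, (watts, ns) in LINK_OPTIONS.items():
    total_kw = watts * LINKS_IN_FABRIC / 1e3
    print(f"{option:36s} ~{total_kw:7.1f} kW extra, +{ns} ns per hop")
```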

“Our energy efficiency interconnect demo showed how we’re working on next-gen higher energy efficiency for electrical and optical links simply because there are so many links in this fabric of fabrics,” Tracy said.

Sustainability is critical in this process. However, even if sustainability wasn’t a factor, accessing electrical power remains a question as cloud and data center providers look to scale.

“There’s a lot of green and sustainability issues to support,” Tracy said. “But even if we didn’t care about that, the question is how do you keep supplying more and more power on the growth trajectory of the power consumption in these data centers.”

ECOC’s demo collaboration

OIF led an interoperability demonstration featuring a live collaboration between 34 member companies at the ECOC 2024 exhibition.

This highlighted advancements in performance, efficiency, and capacity in response to the needs of future-oriented data centers, AI/ML technologies, and disaggregated systems.

Tracy said that the number of members who participated in the live demo during ECOC shows the importance of its work.

“While the location of ECOC was not in the center of the AI and cloud operator world, we were still able to have 34 members who were dying to be part of this demonstration because it is so impactful,” he said.  

The demonstration spotlighted interoperability innovations broken out into four tracks: coherent optics (400ZR, 800ZR, and multi-span optics); Energy-Efficient Interfaces (EEI) and co-packaging; 224G and 112G Common Electrical I/O (CEI); and the OIF’s Common Management Interface Specification (CMIS), all pivotal for shaping the next decade of industry standards.

Tracy pointed out significant involvement in the 112G CEI demonstration. “We have clearly shown year after year that 112G linear interoperability works, and we showed it again,” he said. “We also had CEI 224 gigabit linear demos, as well as CEI 224G chip-to-module, LR, and MR demos.”

The group also conducted demonstrations of 1.6 Tbps over passive copper cables and 1.6 Tbps over active copper cables.

“We’re enabling that next generation of cloud, but we have to deliver higher data rates and optimize for power and latency,” Tracy said. “That was reflected in the CEI track and the EEI track.”

The EEI track focused on power efficiency and examined how the OIF could combine its optical and electrical focus areas.

“We had to look at how to power optimize for reduced power and how to get higher density and lower latency, which is all driven by the AI fabric,” Tracy said. “If the AI fabric isn’t capable of higher power, lower latency, you’re not going to meet the need, and you won’t be able to solve the challenges AI is solving.”

Calling for common management

Speeds and feeds are just one part of what OIF is driving.

The organization continues to evolve its network management capabilities, namely Common Management Interface Specification (CMIS).

CMIS provides a standard management protocol between a pluggable module, whether an optical module or a copper cable, and the host equipment it plugs into, such as a switch or router.

“We’re creating this plug-and-play operation between the modules and cables and the equipment leveraging those plug-in solutions,” Tracy said. “Someone who didn’t necessarily grow up in the business would not appreciate it and assume if I have an OSFP module and I plug it into an OSFP socket on a switch, it will plug and play, but no, they don’t.”  

Tracy added that before CMIS, many optical, switching, and component vendors had been developing their own management tools.

“CMIS allowed for an industry standard interface that all pluggables and all things that pluggables go into would have a common set of expectations,” he said. “Now, as the plugins become more powerful and capable, we’re introducing the capability for the host equipment to interrogate the module. The module can say here are all the functions it is capable of, and the host or switch wants you to operate in this mode.”
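In practice, that interrogation amounts to the host reading a small, well-defined region of the module’s management memory. The sketch below is a simplified illustration of the idea; the byte offsets and identifier codes reflect the author’s reading of the CMIS and SFF-8024 documents and should be verified against the published IAs, and the example bytes are fabricated.

```python
# Minimal sketch of a host "interrogating" a CMIS module, as described above.
# Byte offsets and codes are the author's reading of CMIS/SFF-8024; verify
# against the published specifications before relying on them.

SFF8024_IDENTIFIERS = {0x18: "QSFP-DD", 0x19: "OSFP"}  # small subset, for illustration

MODULE_STATES = {0b001: "ModuleLowPwr", 0b010: "ModulePwrUp",
                 0b011: "ModuleReady", 0b100: "ModulePwrDn"}

def describe_module(lower_page: bytes) -> str:
    """Decode identifier, CMIS revision, and module state from lower page 00h."""
    ident = SFF8024_IDENTIFIERS.get(lower_page[0], f"unknown (0x{lower_page[0]:02x})")
    major, minor = lower_page[1] >> 4, lower_page[1] & 0x0F   # CMIS revision byte
    state = MODULE_STATES.get((lower_page[3] >> 1) & 0b111, "reserved/other")
    return f"{ident}, CMIS {major}.{minor}, state={state}"

# Fabricated 4-byte prefix of the lower page, purely for illustration:
print(describe_module(bytes([0x19, 0x50, 0x00, 0b0000_0110])))
# -> OSFP, CMIS 5.0, state=ModuleReady
```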

With CMIS being a key priority for OIF, the industry organization continues to evolve its management standards.

In September, the OIF unveiled the latest Common Management Interface Specification (CMIS) Implementation Agreement (IA) (version 5.3), the External Laser Small Form Factor (ELSFP) Pluggable CMIS IA, and the Formfactor Specific Hardware Management (CMIS-FF) IA.

Gary Nicholl of Cisco, OIF Board Member and Physical and Link Layer (PLL) Working Group – Management Co-Vice Chair, said in a release announcing the new IAs that “these new standards not only enhance the functionality of modules but also streamline the integration process, reducing the time to market for new technologies.”


About the Author

Sean Buckley

Sean is responsible for establishing and executing the editorial strategies of Lightwave and Broadband Technology Report across their websites, email newsletters, events, and other information products.
