Is serial 100G ready for 400 Gigabit Ethernet standardization?

Jan. 29, 2015
The task force wrestles with whether it should focus on the present or the future.

The IEEE P802.3bs 400 Gigabit Ethernet Task Force started last March with a clear set of media/reach objectives:

  • At least 100 m over multimode fiber.
  • At least 500 m over singlemode fiber.
  • At least 2 km over singlemode fiber.
  • At least 10 km over singlemode fiber.

It appears the multimode specification will leverage a 16×25-Gbps approach. Meanwhile, the singlemode specifications have seen significant debate around whether the standard should include 4×100-Gbps approaches alongside (if not instead of) 8×50-Gbps options for at least some reaches.
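All three lane configurations under discussion multiply out to the same 400-Gbps aggregate; a quick sketch of the arithmetic (nominal payload rates, ignoring FEC and encoding overhead):

```python
# Each candidate approach splits the 400-Gbps aggregate across parallel
# lanes. Rates shown are nominal payload rates, not line rates.
configs = {
    "16x25G (multimode)": (16, 25),
    "8x50G (singlemode)": (8, 50),
    "4x100G (singlemode)": (4, 100),
}

for name, (lanes, gbps_per_lane) in configs.items():
    total = lanes * gbps_per_lane
    print(f"{name}: {lanes} lanes x {gbps_per_lane} Gbps = {total} Gbps")
```

The 4×100G case also makes the cost argument visible: a quarter of the lasers of 16×25G, and half those of 8×50G.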

"Roughly a third of the people support an 8×50G approach, roughly a third support a 4×100G approach, and a third support use of whatever is 'the right approach' and are open to using either," estimated John D'Ambrosia, chair of the P802.3bs task force, during a conversation ahead of the recent January 12 meeting. "This is a very challenging environment to drive toward consensus."

Two widely held but conflicting beliefs set the parameters of the debate:

  • Serial 100 Gbps will evolve into a practical and elegant optical approach for 400 Gigabit Ethernet.
  • How long that evolution will take, and what the approach will look like when it gets there, remain uncertain.

Which camp you fall into largely depends on which of these two beliefs you find more compelling.

The case for 4×100G

Much of the momentum behind serial 100G derives from work done for 100 Gigabit Ethernet. For example, IEEE 802.3ba leveraged a pair of four-lane approaches - 4×10 Gbps for 40 Gigabit Ethernet and 4×25 Gbps for 100 Gigabit Ethernet. The 4× approach proved popular with systems vendors, several of whom entered the 400 Gigabit Ethernet process assuming 4×100 Gbps would work just as well.

"It was just a gimme - yes, it'll be 4×100G," recalls Jim Theodoras, senior director of technical marketing at ADVA Optical Networking and the company's representative within the task force. "Why? Because everything's 4×25G now. We like it 4-wide."

The IEEE 802.3ba Task Force's creation of a PMA sublayer that could translate between optical and electrical interfaces of different speeds also smoothed the path toward 4×100G. It didn't matter that there wouldn't be a matching 4×100G electrical interface - the PMA sublayer could handle the rate translation.
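As an illustration only (the actual 802.3 PMA lane mapping is more involved), a gearbox of this kind can be sketched as 2:1 bit multiplexing, carrying two slower electrical lanes on one faster optical lane:

```python
# Toy sketch of a PMA-style gearbox: two 50-Gbps electrical lanes are
# bit-interleaved onto one 100-Gbps optical lane, and recovered at the
# far end. A simplification, not the actual 802.3 lane mapping.
def gearbox_2to1(lane_a, lane_b):
    """Interleave bits from two slow lanes onto one fast lane."""
    out = []
    for a, b in zip(lane_a, lane_b):
        out.extend([a, b])
    return out

def ungear_1to2(fast_lane):
    """Recover the two slow lanes from the interleaved fast lane."""
    return fast_lane[0::2], fast_lane[1::2]

a = [1, 0, 1, 1]
b = [0, 0, 1, 0]
fast = gearbox_2to1(a, b)           # twice the symbol rate, same payload
assert ungear_1to2(fast) == (a, b)  # lossless round trip
```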

At least in theory, a 4×100G approach also would be less expensive: proponents point out that such a design would require half the lasers, and half the electronics associated with them, of an 8×50G design.

However, most current serial 100G technology requires potentially expensive digital signal processors (DSPs) whose power dissipation exceeds the design limits of popular optical modules such as the QSFP28 and the smaller CFP variants. The size constraints of such small form factors pose a further problem for the DSPs.

Many in the 4×100G camp, including Theodoras, acknowledge the challenge. "No one's been able to refute or disprove the numbers because they're very well thought out," he admits.

Some believe the potential to overcome such hurdles decreases if the task force abandons 4×100G now. "There's only so much money to invest in optics. And if everything goes into 8×50G, no one's really going to invest in the 4×100G," says Theodoras, explaining the concern. "That then puts off the 4×100G probably three to five years."

A time for everything

Finisar Transceiver Engineering Director Chris Cole strongly believes including current serial 100G techniques in the standard will have the same delaying effect. Cole stresses that he's all for the development of serial 100G technology. "My real disagreement with the proponents of 100G is that they want to standardize it in the IEEE, and they're just not ready," he explains.

Cole points to the fact that initial proposals for serial 100G were based on PAM16 modulation. When that format proved unequal to the task, proponents suggested PAM8. With that approach also discredited, PAM4 (along with discrete multi-tone, or DMT) has become the new contender - and, as discussed already, that approach also has unresolved issues.
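The progression from PAM16 to PAM4 trades symbol-rate relief for signal-to-noise margin: PAM-M carries log2(M) bits per symbol, so fewer levels mean a higher baud rate for the same 100-Gbps lane. A quick sketch (nominal rates, ignoring FEC overhead):

```python
import math

# PAM-M carries log2(M) bits per symbol; higher-order PAM lowers the
# symbol (baud) rate needed for a 100-Gbps lane but demands more SNR.
target_gbps = 100  # nominal payload rate per lane

for levels in (16, 8, 4):
    bits_per_symbol = math.log2(levels)
    gbaud = target_gbps / bits_per_symbol
    print(f"PAM{levels}: {bits_per_symbol:.0f} bits/symbol -> {gbaud:.1f} GBd")
```

So PAM16 needs only 25 GBd but 16 distinguishable amplitude levels, while PAM4 needs 50 GBd with 4 levels - the balance point the task force was still debating.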

"So as we are discussing this in the standard, we are learning more and more limitations," Cole says. "And the worst thing you can do for new technology is to lock down wrong decisions in a standard."

Leaving serial 100G out of the standard will allow developers more freedom for experimentation, Cole believes, adding, "I tell the proponents of 100G that I think it's great they're working on this. There are some Web 2.0 companies where standards are not that important - go install this in their data centers! Get some real experience and then come back to the standards body when you have experience, when you have real data, and we'll write a standard for it."

Meanwhile, 50-Gbps lanes should serve the needs of 400 Gigabit Ethernet well, Cole asserts. The approach can leverage work done on the 50-Gbps electrical lanes expected to be part of the standard while obviating the need for rate translation. The 400 Gigabit Ethernet Task Force also can set the stage for the IEEE's anticipated work on a 50 Gigabit Ethernet standard. Add the synergies with 25 Gigabit Ethernet and anticipated directions in server applications, and 50 Gbps represents an area worthy of investment, Cole says.

What's next?

While Cole's arguments have won over some 4×100G proponents, D'Ambrosia hasn't seen a full retreat. "I notice that the 8×50G group seems to be more people who are really looking toward the higher-end reach spectrum. The 2-to-10-km perspective," he says. "And then there's the 4×100G [camp], who I would characterize as more focused on data-center in-house connections, where they are looking at 500 m to 2 km." In other words, shorter, less demanding reaches.

D'Ambrosia faced the unenviable task of herding these cats in a single direction beginning with the January 12 meeting, by which time the task force's adopted timeline called for baseline specifications to be in place. "I do think the group is headed for some challenges," he says. "I would say January through March I would anticipate being a very interesting period if people in January say, 'We have to get to a standard.' How do we do that?" he concludes.

STEPHEN HARDY is editorial director and associate publisher of Lightwave.
