Driven by the voracious bandwidth appetites of applications such as artificial intelligence (AI) and machine learning (ML), hyperscale servers have moved to 25 Gigabit Ethernet (GbE) connectivity and are transitioning to 50GbE, 100GbE, and beyond. The bandwidth demands of hyperscale data centers continue to grow exponentially:
- International Data Corp. (IDC) predicts that the amount of data created or replicated will more than double, from 80 zettabytes (ZB) in 2021 to 180 ZB in 2025.
- Gartner predicts more than 50% of enterprise-generated data by 2025 will be outside central data centers.
A panel discussion at the Ethernet Alliance’s Technology Exploration Forum (TEF), “New Applications Driving Higher Bandwidths,” detailed the growth. The implications for cloud service providers (CSPs) and the Ethernet ecosystem are profound. The cloud’s need for higher speeds and advanced capabilities is fundamentally changing the architecture of the data center. And it is driving exciting Ethernet innovations.
CSPs will pay almost any cost for the latest and greatest high-speed Ethernet technologies that let them avoid bottlenecks. Over the last 10 or 15 years, their appetite for risk in acquiring the leading-edge capabilities that enable them to achieve their goals has exploded.
The architecture to deliver advanced capabilities and take advantage of 200GbE, 400GbE, and higher speeds will have to look completely different from the architecture built for 100GbE and the speed generations below it.
At the networking speeds that are coming to the cloud, simply sending traffic up a high-speed serial interconnect to a general-purpose host creates a significant inefficiency in the compute space. CSPs are focused on maximizing the general-purpose compute, where end-user developers create their applications to run. As a result, infrastructure functions such as switching, firewalls, storage, isolation, and security are being driven out of the central processing units (CPUs) and into the network edge, so that cloud providers can flexibly commit more CPU cycles to selling services.
CSPs are operating in an increasingly virtualized environment, and that transition is fueling a fundamental transformation of data-center architectures and of the broader Ethernet ecosystem.
The web-scale behemoths all have different concepts of how the network should work, and they have the resources to realize their visions. Customization is clearly the way of the CSPs’ world.
For example, the physical medium that makes the most sense for each application is going to be different as Ethernet strengthens its position as the world’s most broadly used connectivity fabric. The landscape of physical media in use is only going to get more complex and diverse, as CSPs flexibly allocate resources for, say, applications that are more compute intensive versus those that are more storage centric. This means that smarter endpoint technologies in edge devices will be required to appropriately move the applications around within the cloud environment.
Hardware vendors who aim to build in a general-purpose fashion for maximum flexibility face a formidable challenge in accommodating diverse ecosystems and unique customer needs. At the same time, the industry has to be realistic about reaching some semblance of consolidation on solutions; otherwise, no vendor could build all the product variations that its cloud customers want.
That, in turn, sets up a challenging scenario for Ethernet’s standards-development organization (SDO), the IEEE 802.3 Ethernet Working Group. The SDO is critical to fostering cohesiveness and alignment in the ecosystem. At the same time, it must stay far ahead of the technology’s horizons, because the highest-end customers such as CSPs are so hungry for ever-higher speeds that they are pressured to deploy the first thing off the shelf that works. That opens the door to non-compliant technologies, which can introduce market confusion and interoperability issues.
Paving the way forward
A sampling of the several efforts underway within the IEEE 802.3 Ethernet Working Group illustrates the balance to be struck in terms of supporting both today’s and tomorrow’s needs of cloud service providers and the vendors that serve them:
- The IEEE P802.3cw 400 Gb/s over DWDM Systems Task Force activity is focused on the need to interconnect distributed data centers with connections of at least 80 km or where fiber availability is such that multiple instances of Ethernet over dense wavelength division multiplexing (DWDM) systems are required.
- The IEEE P802.3db 100 Gb/s, 200 Gb/s, and 400 Gb/s Short Reach Fiber Task Force is concentrated on requirements around higher data rates and density using lower-cost optical solutions for the shortest links in data centers (up to 100 m).
- The IEEE P802.3df and P802.3dj 200 Gb/s, 400 Gb/s, 800 Gb/s, and 1.6 Tb/s Ethernet Task Forces are looking at the next-generation bandwidth needs of cloud-scale data centers, as well as internet exchanges, co-location services, content delivery networks, wireless infrastructure, service provider and operator networks, and video-distribution infrastructure.
Specifications are being defined three or four technology generations ahead of what is being deployed: at the same time that Technology Generation B is being developed, Generation A is going into production, C is being defined in the IEEE 802.3 Working Group, and D is being scoped. It is a constantly shifting future that the global Ethernet ecosystem is pushed to simultaneously envision and enable. It’s not easy, but this is what the Ethernet community must do and will do.
As this ongoing Lightwave blog series illustrates, CSPs are one aspect of the story of Ethernet’s unfolding adoption across application spaces and growing diversity in its rates and implementation.
The Ethernet Alliance’s Ethernet Roadmap is the industry’s only publicly available Ethernet guide sharing key underlying technologies, current and future interfaces, and the growing range of application spaces where Ethernet plays a fundamental role. As the voice of Ethernet, the Ethernet Alliance is ideally positioned to drive conversations and share the latest insights through this popular resource. Already, the Ethernet Alliance is planning for future iterations of the roadmap, and we encourage CSPs to engage in the process of tracking the technology’s ongoing evolution.
Sam Johnson is the HSN subcommittee co-chair for the Ethernet Alliance. He also is the manager of the Link Applications Engineering team within Intel’s Cloud Networking Group (NCNG). Sam’s role is focused on defining new features and implementation details for link behavior in current and future NCNG products while supporting debug and interoperability testing.
Peter Jones is the chair of the Ethernet Alliance. He is a Distinguished Engineer in the Cisco Networking HW team. He is active in IEEE 802.3. He works on the evolution of technology to add value to physical infrastructure and make technology consumable.
Steve Rumsby represents Spirent as a Principal Member of the Ethernet Alliance. He is currently serving as product development lead, next generation Ethernet and WiFi solutions for Spirent’s Cloud and IP business unit. He has over two decades of architecture and design leadership experience in the telecommunication industry.
Carl Wilson serves as the secretary for the Ethernet Alliance. He is currently an Ethernet Silicon and Technology Planner for Intel’s Ethernet Products Group. For the last 10 years, he has been focused on marketing Intel’s Ethernet controllers and Ethernet IP for SoCs for worldwide market segments including IoT, automotive, client, workstation, data center, communications, storage, and cloud computing.