Fiber’s AI densification  

AI’s growing demands inside data centers will be the final straw that drives copper wiring out of the data center for anything other than power delivery. 
April 7, 2026
6 min read

Key Highlights

  • Innovations such as smaller-diameter glass strands and advanced packaging double fiber capacity within existing conduits, maximizing infrastructure utilization.
  • Multicore fiber technology, now moving toward large-scale deployment, increases capacity by up to eight times compared to first-generation fibers, supporting terabit speeds.
  • New standards like XPO and Hyper-Rail dramatically enhance optical switching density and inter-building connectivity, reducing physical space and power needs while boosting reliability.
  • Fiber is increasingly integrated into server and chip designs to improve speeds, reduce latency, and lower power consumption, supporting dense GPU configurations for AI training.
  • The shift from copper to fiber in data centers and between facilities is enabling more compact, efficient, and scalable infrastructure, paving the way for future innovations in network performance.

While the trend to replace copper with fiber in the data center has been underway for years for scalable, high-speed broadband networking, AI’s thirst for densely packed servers loaded with GPUs will be the final straw that drives copper wiring out of the data center for anything other than power delivery. 

OFC 2026 was the backdrop for vendors across the fiber ecosystem to demonstrate how they can pack more glass and more communications capacity into existing conduits, ducts, racks, and routes to meet the insatiable demands of next-generation AI training models, as the data center industry embarks on hundreds of billions of dollars in computational infrastructure.  

The first wave of densification was underway before the AI building boom, with manufacturers such as Corning shrinking the diameter of an individual strand of glass from 250 microns to 190 microns. New cable packaging enables users, from data centers to service providers, to double the fiber count in the same physical space, creating new capacity in existing conduits and ducts without additional construction.  

But the demand for more broadband delivered through existing space has always been there.

Enter multicore fiber

For years, the fiber industry has been working to perfect multicore fiber technology. AI has provided the impetus to move multicore beyond the trial stage and towards large-scale deployment, with a recently announced multi-source agreement (MSA) formalizing initial standards to fit four communications cores into a single fiber strand.  

The combination of these three advances – smaller diameter, better packaging, and multicore – increases the carrying capacity of a single fiber cable or conduit by a factor of eight compared to first-generation fiber. And the medium is nowhere near its limits. Consider what is being done today with 20-year-old undersea fiber, some of the most precious communication resources on the network: originally designed for gigabit connections, that single-strand fiber is now being driven by Ciena to terabit speeds, with the expectation that shorter-distance fiber will operate at 400G and 800G and migrate to 1.6T as service providers need it.  
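As a back-of-the-envelope check on that factor of eight, the individual gains simply multiply. The sketch below is illustrative only; the 2x packing figure, the strand diameters, and the 4-core count are taken from the paragraphs above, and the area-based packing model is an assumption:

```python
# Rough densification model for a fixed conduit cross-section.
# Figures come from the article; the packing model itself is illustrative.
LEGACY_DIAMETER_UM = 250   # first-generation coated strand diameter (microns)
SLIM_DIAMETER_UM = 190     # reduced-diameter strand (microns)
CORES_PER_STRAND = 4       # multicore fiber per the recently announced MSA

# Strands pack roughly by cross-sectional area, so strand count scales with
# the square of the diameter ratio (~1.7x); improved cable packaging rounds
# that up to the ~2x "double the fiber" figure cited above.
area_gain = (LEGACY_DIAMETER_UM / SLIM_DIAMETER_UM) ** 2
packing_gain = 2

total_gain = packing_gain * CORES_PER_STRAND
print(f"area-only gain: {area_gain:.2f}x, combined capacity gain: {total_gain}x")
```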

Exactly how fast fiber can go in the future seems limited more by the R&D budgets needed to move technology into production than by the medium itself. CableLabs has boasted of a “hero experiment” demonstrating 50 Tbps speeds from existing “in the ground” fiber, so there is plenty of capacity left to squeeze out of existing infrastructure. 

Returning to the data center proper, fiber today provides connectivity across the server, rack, data hall, data center, metro, and long-haul arenas. At the server level, chip and system designers are working to move fiber into the chip to increase speeds, lower latency, and reduce power consumption, but those efforts are still in their early stages. Fiber densification is currently rolling out everywhere from the server through long-haul as firms move from hyperscale models to next-generation neoclouds packed to the ceiling with GPUs for AI model training.  

Fiber’s ability to simplify network design while reducing power consumption and improving reliability is showing up in very old-school data center technology, out-of-band management (OOBM). Servers once connected via copper wiring and KVM switches for boot and console access are now being managed through PON-based solutions that reduce the number of active switches by up to 90%, deliver 50% or more power savings, and simplify operations by 80%. Eliminating legacy OOBM copper, switches, and hardware is freeing up more room for GPUs. 

Hardware communications compression 

Driven by AI data center demands for increased GPU density, equipment manufacturers are packing drastically more capacity into less physical space in several areas, pushing more fiber everywhere from the rack and server levels through the links connecting buildings and geographically distributed sites.  

Resizing optical switches has been at the top of the wish list for next-generation data center projects, since pairing existing technologies with high-density GPU racks resulted in standard OSFP switch racks outnumbering actual compute racks in a given physical space. For example, servicing 512 GPUs across 4 racks of servers with 25.6T of connectivity would require 8 OSFP switch racks, double the space of the working compute. 

The solution announced at OFC 2026 is the new XPO standard, short for eXtra-dense Pluggable liquid-cooled Optics Module. XPO increases optical switching density by a factor of four by redesigning the existing form factor to place components closer together and incorporating liquid cooling to keep those components from overheating. The new standard, supported through a broadly adopted MSA, changes the rack equation in the example above to 2 XPO racks of network connectivity supporting 4 racks of servers, cutting the AI cluster rack footprint in half.  
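In rack-count terms, the 512-GPU example works out as follows. This is a simple illustrative model using only the figures above; the ceiling division assumes switch capacity cannot be split across partial racks:

```python
import math

# Rack footprint for the 512-GPU example cluster, before and after XPO.
COMPUTE_RACKS = 4           # server racks hosting the 512 GPUs
BASELINE_SWITCH_RACKS = 8   # OSFP-based switching for 25.6T of connectivity
XPO_DENSITY_FACTOR = 4      # XPO packs 4x the optical switching per rack

xpo_switch_racks = math.ceil(BASELINE_SWITCH_RACKS / XPO_DENSITY_FACTOR)
before = COMPUTE_RACKS + BASELINE_SWITCH_RACKS   # 12 racks total
after = COMPUTE_RACKS + xpo_switch_racks         # 6 racks total
print(f"switch racks {BASELINE_SWITCH_RACKS} -> {xpo_switch_racks}; "
      f"cluster footprint {before} -> {after} racks")
```

With switching no longer outnumbering compute, the freed rows can hold more GPU racks instead.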

In addition, the component redesign and the addition of liquid cooling increase reliability and lower power consumption, two other necessary attributes at higher form-factor densities. Packing more equipment into the same space raises the stakes for reliability; otherwise, operators will spend more time playing plug-and-replace than plug-and-compute.  

While XPO increases communications density within the data center, AI training models are expanding beyond a single physical facility to other facilities, pushing the need for hundreds more fiber connections between buildings and campuses. Before the AI boom, amplifier huts between data centers were designed to support around four fiber pairs, or rails, per rack, which was more than sufficient for the needs of the day. Scaling up for more pairs means more racks, power, and space, all rare commodities in huts designed for bare-bones operations. 

Scaling to support more rails means substantially increased capacity. Ciena has introduced its Hyper-Rail solution, packing support for up to 128 pairs into a single rack, a 32-fold increase in density. It also reduces power consumption by 75%, a significant factor since amplifier huts weren’t designed to be full-blown data centers with power to spare.  
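The hut-scaling arithmetic is straightforward. The sketch below uses only the pairs-per-rack figures cited above; the 128-pair route size is a hypothetical scenario for illustration:

```python
# Amplifier-hut capacity before and after Hyper-Rail, per the figures above.
LEGACY_PAIRS_PER_RACK = 4        # pre-AI-boom hut design: ~4 fiber pairs/rack
HYPER_RAIL_PAIRS_PER_RACK = 128  # Ciena Hyper-Rail: up to 128 pairs/rack

density_gain = HYPER_RAIL_PAIRS_PER_RACK // LEGACY_PAIRS_PER_RACK  # 32x

# Hypothetical example: racks needed to light 128 fiber pairs between campuses.
ROUTE_PAIRS = 128
legacy_racks = ROUTE_PAIRS // LEGACY_PAIRS_PER_RACK          # 32 legacy racks
hyper_rail_racks = ROUTE_PAIRS // HYPER_RAIL_PAIRS_PER_RACK  # 1 rack
print(f"{density_gain}x density; racks for {ROUTE_PAIRS} pairs: "
      f"{legacy_racks} -> {hyper_rail_racks}")
```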

Pushing the edge of compute and communications density in AI data center megacomplex projects will have significant impacts on “ordinary” broadband operations in the years to come. Decades ago, a single 45 Mbps fiber connection would require 10 racks of equipment. Today’s equipment delivers up to 800G on a single fiber through a single plug-in connector in a single box, servicing dozens to hundreds of fibers, taking up a fraction of a single rack.  

AI’s wave of densification will lead to smaller, more powerful, and more power-efficient gear for service providers of all sizes in the years to come, enabling faster speeds at lower prices and new services. Proliferating edge computing into existing networks for AI, IoT, and other applications will mean that “old” gear that takes up space and power will be replaced by newer equipment built to leverage today’s densification innovations.  

In combination with multicore and other innovations, fiber is the resilient and secure medium for generations to come, unaffected by space weather or jamming, and more than robust enough to replace legacy copper twisted-pair and coaxial cable plants that are overdue for retirement.  The only thing that is unknown about fiber at this point is all the benefits it will deliver in the future. 

About the Author

Gary Bolton

Gary Bolton is the president and CEO of the Fiber Broadband Association. 
