
Emerging AI architectures are driving far higher optical port counts in the back-end network than traditional compute or
front-end networks. Traditional compute servers connect via copper to top-of-rack switches, each of which has a small number of optical uplink ports to the leaf switch layer; the majority of the optical ports in the network reside in the switch-to-switch links. In AI clusters, there are typically multiple GPUs per server, or compute tray, each of which connects to the first layer of switches in the scale-out network via pluggable transceivers.
This dramatically increases the number of optical ports per server or compute rack. Optical ports in a server rack grow from
a handful of uplink ports at a top-of-rack switch to 72 transceivers, each with dual-DR4 optics, across the compute trays; as a result, each compute rack has to support over 1,000 fibers. With 1:1 oversubscription, when all of these ports reach the first layer of switches,
they are duplicated as uplinks to the second layer of switches, further increasing the volume of optical ports required to build an AI scale-out network. These network designs demand significantly higher-density fiber optic cabling and connectivity solutions to address the increase in optical ports. Along with density requirements, the need to accelerate deployment beyond traditional methodologies has become critical as the race to turn up AI clusters intensifies.
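The rack-level fiber count above can be sanity-checked with a quick calculation. This is a sketch, assuming each DR4 optic carries 8 fibers (4 transmit, 4 receive), so a dual-DR4 transceiver carries 16:

```python
# Sketch of the per-rack fiber arithmetic from the example above.
# Assumption: one DR4 optic = 8 fibers (4 Tx + 4 Rx).
TRANSCEIVERS_PER_RACK = 72
DR4_PER_TRANSCEIVER = 2   # "dual-DR4" optics
FIBERS_PER_DR4 = 8        # 4 transmit + 4 receive

fibers_per_rack = TRANSCEIVERS_PER_RACK * DR4_PER_TRANSCEIVER * FIBERS_PER_DR4
print(fibers_per_rack)  # 1152 -- over 1,000 fibers per compute rack

# With 1:1 oversubscription, each port arriving at the first switch
# layer is mirrored as an uplink to the second layer, roughly doubling
# the optical port count across the scale-out fabric.
total_optical_ports = 2 * TRANSCEIVERS_PER_RACK
print(total_optical_ports)  # 144 optical ports attributable to one rack
```

The point of the arithmetic is that fiber volume scales with GPU count rather than with switch count, which is why rack-level density becomes the binding constraint.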
The MMC connector platform delivers a multi-fiber Very Small Form Factor (VSFF) connectivity solution that achieves significant density gains over MPO- and LC-based cabling infrastructure (see the article "Solving Data Center Densification Problems with a Novel Very Small Form Factor Connector"). In addition, each component of the MMC connector platform is designed to operate as a system that accelerates the data center deployment life cycle, from structured cabling design through installation and testing.
Download this OnTopic now!

