Co-Packaged Optics and the AI data center: From skepticism to strategic adoption
Co-packaged optics (CPO) has become one of the most talked-about technologies in the AI data center world. Vendors and standards bodies aggressively position CPO as the answer to AI’s bandwidth, latency and power crises, yet many users remain in a quandary, wondering whether they need CPO, whether it is mature enough, and whether adopting it will create more operational risk than performance benefit. CIR has recently completed a study of CPO’s prospects given the current mix of technical promise and user skepticism.
CPO demos at OFC and ECOC, and perhaps these days GTC, are impressive. Yet if you walk into most data centers today, outside of a small number of hyperscale facilities, you will find few CPO deployments. Talk to data center managers and you will find a mix of cautious curiosity, engineering-led skepticism, and perhaps an industry quietly preparing for a technology transition that may take a decade to play out fully.
However, CPO is not just a technology story. It is a story about user psychology, risk tolerance, data center culture, and the changing relationship between infrastructure buyers and their suppliers.
Why CPO is back in the spotlight
CPO is not new. The concept – packaging photonics and electronics close together – goes back years, to early work by IBM on supercomputer interconnects and to “flyover” interconnect concepts. What has changed is the emergence of AI as the defining workload of modern data centers. Before the AI boom, CPO was pitched as a broadly applicable innovation—useful for sensors, telecom, high-performance computing, and data center interconnect. This “shotgun” positioning generated early excitement but ultimately failed to sustain demand.
AI-driven demand is more focused. CPO is framed as a solution to specific and real problems in AI data centers: electrical interconnects are becoming too lossy and too power-hungry as the industry moves toward 112G and 224G SerDes, and as switch ASICs approach 51.2T and beyond. In that context, CPO’s value proposition becomes easy to explain. By shortening electrical paths and moving optical interfaces closer to the ASIC, CPO promises better energy efficiency, improved bandwidth density, and a path toward future ultra-high-radix switches (51.2T to 204.8T). In the table below we summarize the consensus view on where CPO can best show off its apparent advantages.
Advantages Currently Claimed for Co-Packaged Optics (CPO)

| Claimed Advantage | Rationale |
| --- | --- |
| Power efficiency | Shorter traces between ASICs and optics, reduced DSP power, and lower I/O drive requirements. These may yield a 20–40% reduction in interconnect power at 800G to 1.6T and ~5–15W saved per 800G port, plus a 200–500W reduction per switch. With thousands of switches, these savings aggregate to multi-MW levels at hyperscale. They also reduce cooling needs. |
| CAPEX | Fewer power distribution upgrades and less need for cooling plant expansion. CPO may also enable higher rack density and fewer switches producing the same aggregate bandwidth. |
| OPEX | Lower costs for electricity and cooling are possible. |
Source: CIR
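The multi-MW aggregation claim can be sanity-checked with simple arithmetic. A minimal sketch, using the per-switch savings figure from the table (200–500W); the fleet size of 10,000 switches is an illustrative assumption, not a figure from the study:

```python
# Sanity check of the claimed aggregate power savings from CPO.
# Per-switch savings (200-500 W) come from the table above; the
# 10,000-switch fleet is a hypothetical hyperscale deployment.

def aggregate_savings_mw(watts_per_switch: float, num_switches: int) -> float:
    """Total interconnect power saved across a switch fleet, in megawatts."""
    return watts_per_switch * num_switches / 1e6

low = aggregate_savings_mw(200, 10_000)   # 2.0 MW
high = aggregate_savings_mw(500, 10_000)  # 5.0 MW
print(f"Fleet of 10,000 switches: {low:.1f}-{high:.1f} MW saved")
```

At these fleet sizes the claimed per-switch savings do indeed reach the multi-MW range, before counting the knock-on reduction in cooling load.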
This seems too good to be true, and it may be. The mistrust of potential CPO users stems from CPO being initially more complex than pluggables: the front-panel design of conventional data center boxes is simpler than its CPO equivalent and easier to service. Network managers from a couple of decades ago would be shocked that pluggability could be disposed of so easily. Some level of confidence might be re-established by pointing out that the disappearance of pluggability may also lead to fewer field failures.
And for all the talk of CAPEX savings, early generation CPO switches may cost more than conventional pluggable switches. Caveat emptor!
How potential CPO users really feel: The hyperscalers’ tale
Among data center operators, there is a divide between fascination and distrust of CPO. Operators acknowledge that CPO appears technically compelling but also see it as something that could turn into an operational headache. CIR’s research suggests that even within the traditional (i.e., non-hyperscale) data center market, awareness of CPO remains limited: outside hyperscale environments, the average data center manager may know little about CPO. The same can be said of many established pluggable transceiver suppliers.
From an engineering perspective, CPO seems to matter most when the longer-term goal is scaling to 102.4T ASICs. At that point power becomes the limiting factor, with hints that “extreme” port densities will be required. This tends to mark CPO as a hyperscaler technology. Indeed, Microsoft, Meta, Google, and Amazon are already running internal trials of CPO. This is not experimentation for its own sake: they are looking for anything that can save them from higher power budgets in the future. The hyperscalers see CPO as part of a broader architectural shift: photonic fabrics, denser racks, and the possibility of scaling AI clusters and racks beyond what copper can support.
In this sense, hyperscalers view CPO not as an isolated technology upgrade, but as a key enabling element of the next generation of AI infrastructure. Also, unlike the owners and managers of enterprise data centers and more modest cloud and edge facilities, hyperscalers’ willingness to adopt is reinforced by their existing acceptance of nontraditional supply chains. Hyperscalers are comfortable with vendor lock-in if the performance gains justify it. They have procurement leverage, engineering teams capable of designing around supplier weaknesses, and, in many cases, the ability to demand custom solutions. For hyperscalers, the question is not “Should we deploy CPO?” but “How quickly can we industrialize it?”
Enterprise and colocation operators: “Show me reliability”
CIR notes that outside hyperscale environments there is little evidence of CPO deployments today, despite references to smaller users in the trade press. Even if some small-scale CPO deployments do exist, they are not widely visible or influential enough to make much difference. Enterprise and colocation operators and other smaller operators have a fundamentally different culture from that of the hyperscalers.
They are not building large proprietary platforms, and they rarely have the engineering staff to run complex optical integration programs. Their attitudes toward CPO are shaped by a different set of priorities: interoperability, multi-vendor supply chains, and field serviceability. This will create different adoption curves. Hyperscalers may jump early, while the rest of the market waits for “proof,” standardized interfaces, and a mature ecosystem. In effect, hyperscalers may act as the industry’s test lab, and enterprises and other smaller operators will be the eventual volume market.
The “Bridge Technology” psychology: LPO and NPO as comfort zones
Meanwhile, one of the most important trends shaping user attitudes is the rise of “steppingstone” solutions. CIR emphasizes that cautious potential CPO users will not leap directly from pluggables to full CPO. Instead, they will adopt intermediate architectures such as near-packaged optics (NPO) and linear pluggable optics (LPO), which provide some of the benefits of reduced power and improved signal integrity without fully sacrificing modularity.
Operators currently move gradually because they don’t trust early-generation CPO manufacturing yields, thermal behavior, or repair/maintenance models. NPO and LPO allow them to experiment with shorter electrical traces, reduced DSP overhead, and emerging electrical interfaces such as CEI-112G and CEI-224G without rewriting operational playbooks overnight.
LPO appeals strongly to operators focused on power and latency. By removing the DSP, LPO promises lower power consumption and reduced latency, both valuable for AI. But it also introduces constraints: shorter reach, stricter host requirements, and tighter signal budgets. NPO offers proximity benefits without full co-packaging, reducing thermal and manufacturing risk.
These bridge technologies matter because they will shape the pace of CPO adoption. CPO is “the endgame,” but for enterprise operators, CPO is seen as “next decade technology.” Many believe that even if CPO becomes important, intermediate steps may provide benefit without the risks of full co-packaging.
Thermal reality and the return of pluggability
The most frequently cited technical barrier to CPO adoption is thermal management. The very act of bringing optics closer to the ASIC introduces heat-related risks. Optical components, especially lasers and photonic ICs, have strict temperature requirements; temperature instability causes wavelength drift, accelerated aging, and performance degradation. The CIR CPO report cites thermal management as one of the biggest factors discouraging CPO adoption today.
However, the most interesting aspect of CPO lasers is not the paradox that they come with their own thermal concerns, but that they might be bringing back pluggability through the back door. As things now stand, the External Laser Small Form Factor Pluggable (ELSFP), driven by OIF implementation agreements, represents a compromise between full CPO integration and traditional modular optics. The logic is simple: lasers fail, lasers degrade, and lasers are best kept in cooler areas. External laser sources allow replacement without disturbing the switch ASIC package – the return of pluggability, in a sense.
From a user attitude standpoint, ELSFP is appealing because it addresses the “repair anxiety” that haunts CPO discussions. Operators may not mind losing the ability to swap out an optical engine if they can at least swap out the laser source. This alone makes the CPO model feel less fragile. However, the external approach also introduces new risks, such as insertion loss and the possibility that one laser failure could affect multiple channels. ELSFP and external laser architecture are likely to play a major role in easing adoption.
Vendor influence: Broadcom and NVIDIA shape perceptions
User attitudes toward CPO are also shaped by the credibility of its champions. Here we note that Broadcom and NVIDIA have emerged as the most influential suppliers driving CPO narratives. Broadcom’s early work with its Bailly platform established it as a reference point for switch ASIC integration. NVIDIA, meanwhile, has brought CPO into the mainstream AI conversation by integrating it into its Spectrum-X and Quantum-X platforms and then showcasing these systems publicly.
NVIDIA’s approach is particularly interesting since it reflects an awareness of operator concerns. Its architecture includes detachable optical sub-assemblies, implying a partial modularity model. In effect, NVIDIA appears to be designing CPO systems with manufacturability and replaceability in mind, acknowledging that pure co-packaging without serviceability would be a hard sell.
Vendor strategies matter because users often will not adopt new infrastructure technology until they believe a “safe vendor path” exists. In networking, trust is often brand driven. Operators will tolerate risk if they believe the supplier can absorb it through engineering support and long-term product stability. Broadcom and NVIDIA are therefore not just suppliers—they are CPO confidence engines.
The supply chain anxiety: “Will this become another lock-in trap?”
That said, CPO changes procurement in ways that make operators uncomfortable.
With pluggables, operators buy optics from multiple suppliers and treat them as interchangeable commodities. CPO threatens this model. If optics are integrated into the switch package, the operator becomes dependent on the switch vendor’s packaging ecosystem and replacement policies. Instead of purchasing interchangeable modules, customers may have to source integrated CPO systems from a single vendor or partnership. Another issue is that with CPO an optical failure might require replacement of a board, a line card, or even an entire switch assembly. This is not the kind of truth that data center managers want to hear. CPO violates the instincts of operations teams, and many will interpret it as “vendor lock-in disguised as innovation.”
This is why interoperability work at organizations like the OIF and the Advanced Photonics Coalition matters. Users are not only watching for performance—they are watching for ecosystem maturity and multi-vendor credibility.
How adoption will actually evolve: A three-stage pattern
CPO market revenue will grow as operators gain confidence in equipment testing, supply chains, cable management, and cooling. As with all such technologies, the growth of CPO, if it is successful, will be non-linear.
From skepticism to acceptance (2026–2028): CPO will be a plaything for hyperscale AI clusters, not a mainstream networking technology. The continued scaling of AI fabrics will force more serious evaluation of CPO. The limiting factors will be unresolved issues in lasers, packaging yield, thermal design, and testing.
From acceptance to dependency (2029–2032): The next phase will shift from experimentation to reliance. As AI clusters scale toward 100T-class, CPO becomes the only technology to date that can do the job. User attitudes may shift from “we are evaluating” to “we cannot scale without it.”
From dependency to optimization (2032–2035): Once CPO is mainstreamed, the conversation changes. It will now focus on which vendor’s CPO architecture is better. This will be the period when non-hyperscale operators begin adopting CPO in meaningful volume.
We are aware our numbers for CPO may seem optimistic to some. The doubters recall networking technologies that have long disappeared without a trace – FDDI springs to mind. One important factor, CIR believes, is the evolution of AI itself. It is possible that smaller language models could reduce the need for high-speed interconnects, leaving CPO as a niche technology. If AI workloads become distributed or less bandwidth-hungry, the urgency of CPO may also decline.
Yet another wildcard is copper in the rack. NVIDIA’s continued use of copper for NVLink reinforces the industry’s long-standing pattern: fiber is adopted only when copper fails. CIR notes that an optical takeover is not in the stars. Fiber will penetrate the rack, but no one knows for sure how fast or how deep.
Coda: CPO as a cultural shift
CPO’s success depends on more than bandwidth density and power-per-bit metrics. It depends on whether operators can trust it. Right now, users are intrigued but skeptical. Hyperscalers are moving forward because some of them suspect that CPO will be the only long-term strategy for scaling AI fabrics, even if CPO disrupts the service model that has defined optical networking for decades.
Over the next decade, the attitudes of operators will shift from “this looks risky” to “this is how modern AI networks work.” And when that shift happens, CPO will no longer be discussed as optics technology. It will be discussed as an infrastructure destiny.

