Economics drives carriers to metro optical switching

March 1, 2002
Metro: optical switching

Deploying next-generation optical cross-connects gives carriers a direct route to reducing both their capital and their operational costs.

By Aric Zion, Nortel Networks

Several principles have guided the evolution of networking since the beginnings of SDH: the move to higher bit rates, the development of DWDM, the increased focus on the "connect space" (backbone and major metro optical transport hubs) and the shift towards more-optical networks. Chief among these principles has been the drive to reduce the number of optical-to-electrical and electrical-to-optical (OEO) conversions. By OEO, I mean any form of optical transmitter or receiver, regenerator, wavelength translator or transponder - all of which are a cause of much capital and operational cost.

With the pending obsolescence of much of the world's cross-connect infrastructure due to lack of scalability, and with the growing need for high-capacity backbone and metro ring interconnection, service providers are confronted with a discontinuity that presents a great opportunity. The discontinuity clearly requires the introduction of highly scalable optical cross-connects (OXCs) into the network.

So what are the possible means of reducing CapEx and OpEx afforded by the discontinuity of the emerging backbone and large metro hub connect spaces?

A further reduction in the number of OEO conversions in the optical network can be achieved by drastically cutting the number of interconnects and boxes (muxes and translation devices) required for bandwidth management.

Many backbone and large metro hubs today consist of numerous 2.5 and 10Gbit/s ADMs, which terminate the individual rings that converge on a given hub. Such hub applications suffer from several inefficiencies.

To pass traffic from one ring to another in these hubs, service providers frequently interconnect the tributary interfaces of the rings. This results in a complex mesh of fibre connections which introduces operational complexity as the network scales up, quickly limiting the hub's potential to scale further.
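
To give a feel for how quickly this meshing grows, the short Python sketch below counts the pairwise tributary interconnections needed when every ring terminating at a hub must be able to hand traffic to every other ring. The ring counts and the number of circuits per ring pair are assumed, purely illustrative figures.

    # Back-of-envelope sketch: growth of tributary meshing at a hub.
    # Ring counts and circuits-per-pair are assumed, illustrative values.
    def pairwise_interconnects(rings, circuits_per_pair):
        """Tributary circuits needed if every ring hands traffic to every other ring."""
        ring_pairs = rings * (rings - 1) // 2
        return ring_pairs * circuits_per_pair

    for rings in (4, 8, 12, 16):
        circuits = pairwise_interconnects(rings, circuits_per_pair=8)
        print(f"{rings} rings -> {circuits} tributary circuits")

Because the number of ring pairs grows roughly with the square of the number of rings, each new ring added to the hub costs disproportionately more interconnect than the last.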

Other service providers have deployed VC4 cross-connects in their networks to handle this situation: the converging 2.5 and 10Gbit/s rings all drop their traffic via their respective tributary interfaces, and it is fed into a VC4 cross-connect. However, in high-capacity backbone and metro hubs, the generation of commonly deployed VC4 cross-connects is nearing, or has already reached, its operational limit.

Still other service providers employ a combination of the two above techniques: passing traffic from 2.5 and 10Gbit/s ADMs to a VC4 cross-connect alongside direct ADM-to-ADM traffic handover, in order to put some distance between themselves and the brick wall of non-scalability. However, this approach only somewhat delays the inevitable - and at the price of increased complexity.

In all of these cases, high CapEx and OpEx are associated with the tributary-to-tributary interconnects, whether they run directly between ADMs or between ADMs and cross-connects.

Within hubs, ADMs are often connected to separate systems that perform wavelength translation so that the traffic from multiple ADMs and routers can share the same fibre pair. Again, this arrangement incurs capital and operational expense by introducing separate wavelength translation devices.

Today's traffic is generally sold in increments less than or equal to a VC4, so in order to fill wavelengths efficiently, traffic must be managed at the VC4 level. It is possible to deploy an optical cross-connect that manages bandwidth at the 2.5Gbit/s level (groups of 16 x VC4). However, for almost all of today's traffic and much of the traffic for years to come, 2.5Gbit/s bandwidth management only succeeds in forcing traffic into the electrical domain several additional times so that VC4 management can be performed on a separate device - both at the edge of the network and further in towards the core, where optimal optical pipe fill is required to realise transport efficiencies. The result is extra OEO conversions, which is precisely what we are trying to avoid.
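
To make the granularity argument concrete, the sketch below uses standard SDH arithmetic (a 10Gbit/s STM-64 carries 64 VC4s; a 2.5Gbit/s STM-16 block carries 16) to compare wavelength fill under VC4-level and 2.5Gbit/s-level management. The per-tributary fill figures are assumed, illustrative numbers, not data from any particular network.

    # Illustrative wavelength-fill comparison. SDH constants are standard;
    # the per-tributary fill figures below are assumed for illustration only.
    VC4_PER_STM64 = 64   # a 10Gbit/s wavelength carries 64 VC4s
    VC4_PER_STM16 = 16   # a 2.5Gbit/s block carries 16 VC4s

    # Assume six incoming 2.5Gbit/s tributaries, each only partly filled:
    tributary_fill = [5, 7, 9, 4, 11, 6]       # VC4s in use per tributary
    total_vc4 = sum(tributary_fill)            # 42 VC4s of real traffic

    # VC4-granular OXC: repack circuits into as few 10G wavelengths as possible.
    waves_vc4 = -(-total_vc4 // VC4_PER_STM64)            # ceiling division -> 1
    fill_vc4 = total_vc4 / (waves_vc4 * VC4_PER_STM64)

    # 2.5G-granular OXC: each partly filled block must stay intact, and a 10G
    # wavelength holds at most four such blocks, however empty they are.
    waves_25g = -(-len(tributary_fill) // 4)              # -> 2
    fill_25g = total_vc4 / (waves_25g * VC4_PER_STM64)

    print(f"VC4-level management:  {waves_vc4} wavelength(s), {fill_vc4:.0%} fill")
    print(f"2.5G-level management: {waves_25g} wavelength(s), {fill_25g:.0%} fill")

Without VC4-level grooming somewhere, the half-empty 2.5Gbit/s blocks cannot be consolidated: either the wavelengths run partly full, or the traffic makes additional OEO passes through a separate grooming device.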

One can introduce a VC4 OXC into the large backbone and metro hubs in order to increase the scalability of these sites and of the network in general. But unless the OXC can scale linearly - well beyond a terabit - scalability will still be limited. An OXC should therefore have the following capabilities: 10Gbit/s interfaces as well as 2.5Gbit/s ones; 40Gbit/s evolution potential; DWDM-capable interfaces; and integrated support for the world's large and still growing base of both SNCP and SPRing systems.

It is worth considering some proof points with respect to both DWDM-capable interfaces and integrated SNCP and SPRing support. Figure 1 illustrates a hub where there are separate racks of kit for VC4 cross-connection, for DWDM termination and origination and for 10Gbit/s circuit termination. In addition, there are racks of kit for rectifiers, batteries and fibre patch panels. And there is also a representation of space required for generator backup. The number of racks of each of these various types of kit is correct for a typical large hub site handling 640Gbit/s.

Figure 2 illustrates the optimised connect space that is achievable with a next-generation OXC. Since the next-gen OXC has integrated SPRing functionality, there is no need for separate 10Gbit/s ADMs. There is also no need for expensive tributary interfaces that consume both power and floorspace, and that would have served no purpose but to interconnect 10Gbit/s ADMs and the cross-connect.

Since the next-gen OXC has DWDM-capable interfaces, there is no need for DWDM bays that would have served no purpose but to provide wavelength translation. Wavelengths emerging from the next-gen OXC are tuned to a DWDM grid.

At first it might seem as if this were simply a CapEx benefit due to the reduction in optical kit. It is - but not just in terms of fewer ADMs and less DWDM kit; it also means fewer rectifiers and batteries and smaller generator sizing, since these are tightly coupled to the amount of optical kit. First-generation OXC-based networks consume excessive power because of the extra tributary interfaces, extra ADMs and extra DWDM translators. Moreover, for every kilowatt of power consumed or saved by optical kit in a network, approximately a further 1.25kW is needed (or saved) for site cooling.

Taken together - the decrease in optical networking muxes and translators, the decrease in rectifiers and batteries, and the floorspace freed by smaller generator sizing - these reductions have a substantial impact on network economics.

To illustrate this, we have used the following market-based assumptions: a cost per kWh of 7.5 Euro cents (6.5 US cents) and a European average floorspace lease price of Euro920 (USD800) per month for the optical kit racks. The net savings over five years for a next-gen OXC based hub versus a first-gen based hub are as follows (a back-of-envelope sketch of the arithmetic follows the list):

  • CapEx savings on optical kit of 63%
  • Electricity savings of Euro390,000 (USD339,000)
  • Total footprint reduction of 30 bays (including rectifiers and batteries)
  • Footprint cost reduced by Euro1million (USD912,000)
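
The electricity and footprint figures above follow directly from the stated assumptions once a power saving and a rack count are plugged in. The sketch below shows the arithmetic; the kit power saving and the number of optical-kit racks removed are assumed figures, chosen only so that the result lands near the quoted savings - only the tariff, the lease rate and the 1.25kW cooling factor come from the text.

    # Back-of-envelope, five-year savings arithmetic.
    # Tariff, lease rate and cooling factor are from the text; the kit power
    # saving and rack count are assumed, illustrative inputs.
    EUR_PER_KWH = 0.075          # 7.5 Euro cents per kWh
    EUR_PER_RACK_MONTH = 920.0   # floorspace lease per optical-kit rack
    COOLING_FACTOR = 1.25        # extra cooling load per kW of optical kit
    YEARS = 5
    HOURS = YEARS * 365 * 24

    kit_power_saved_kw = 53.0    # assumed: optical kit power removed from the hub
    racks_saved = 18             # assumed: optical-kit racks no longer leased

    site_power_saved_kw = kit_power_saved_kw * (1 + COOLING_FACTOR)
    electricity_saving = site_power_saved_kw * HOURS * EUR_PER_KWH
    footprint_saving = racks_saved * EUR_PER_RACK_MONTH * 12 * YEARS

    print(f"Electricity saving over {YEARS} years: ~EUR {electricity_saving:,.0f}")
    print(f"Footprint saving over {YEARS} years:  ~EUR {footprint_saving:,.0f}")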

So CapEx and OpEx are significantly reduced, while network reliability is actually increased, because the number of optical interconnects falls. In the above scenario, for example, the number of optical interfaces in the hub would be reduced by 640 (91%). The reduced number of optical elements offers other benefits too: easier trouble-shooting, extra services enabled by intelligent OXCs and faster time-to-revenue.

The carriers' need for these benefits is clear. So, too, is the opportunity for network evolution through the discontinuity of the evolving connect space: it offers a turning point not only in network efficiency but in revenue generation as well.

Fig. 1 A "first generation" OXC has been deployed. This is a cross-connect that does not scale well beyond 640Gbit/s: it has no 10Gbit/s interfaces, no integrated SPRing or SNCP functionality and no DWDM-capable interfaces.
Fig. 2 The optimised connect space achievable with a next-generation OXC. Since the next-gen OXC has integrated SPRing functionality, there is no need for separate 10Gbit/s ADMs and no need for power-hungry, bulky tributary interfaces that would have served no purpose but to interconnect the 10Gbit/s ADMs and the cross-connect. Also, there is no need for DWDM bays, as wavelengths are already tuned to a DWDM grid.

Aric Zion
Solutions Marketing,
EMEA Optical Long Haul
Nortel Networks
[email protected]

Aric Zion has spent the past 14 years working in the computing and telecommunications industries in N.A., Europe and Asia. He is currently responsible for EMEA Optical Long Haul product marketing and business development at Nortel Networks.

A next-generation OXC should:

  • Provide maximum capital and operational expense (CapEx and OpEx) reduction
  • Be 10Gbit/s capable and 40Gbit/s ready
  • Support multiple classes of service
  • Provide all required granularities of bandwidth management
  • Scale linearly in cost
  • Be part of a greater network strategy of evolving to photonic cross-connection
