Integration of switching and transport at the optical layer

May 1, 2002

Bringing together DWDM, switching, and SONET into one platform can lower capital costs and offer management simplicity.

MARC SCHWAGER and IAN WRIGHT, Altamar Networks

The optical transport network has evolved rapidly over the past several years with the introduction of new technologies such as DWDM and optical switching. These technologies have been deployed alongside the continued rollout of SONET. With the insatiable demand for new bandwidth, the central office (CO) has accumulated many different systems, all linked together by manual fiber patch panels.

In the calmer business environment of today, carriers are slowing their capital and operational spending. As such, they are seeking technologies and solutions that lower the economic cost of building an optical transport network and are less interested in the introduction of radical new technologies.

One area ripe for significant reduction in the cost and complexity of the optical network is the integration of functions that are today spread across separate systems. Specifically, bringing together DWDM, switching, and SONET functions in one platform can lower capital costs and offer a simpler CO environment to manage. Ongoing costs such as space, power, service provisioning, maintenance, and capacity additions can also be reduced through an integrated solution.

Integrated OTN
A small number of startup vendors are addressing the requirement for integration of functions in the optical transport node (OTN). While the technology in each solution differs, the overall approach is similar. Figure 1 contrasts a typical architecture for today's CO with a typical architecture built around an integrated OTN; the essential idea is the elimination of multiple layers of equipment. Two key ideas underpin an integrated OTN:

  • The DWDM system becomes the input/output to the switch. With existing practice, the DWDM equipment takes in a fiber with a large number of channels on it and demultiplexes that to a number of individual short-reach optical signals. Breaking out each of the optical signals, one for each wavelength, adds cost and complexity. The cost is derived from having a full-duplex short-reach optical system for each wavelength, while complexity comes from having to manage the numerous physical connections between boxes.
  • The SONET functions can be included in the switch, further validating the design of an integrated OTN. There are a number of elements of SONET functionality that must be replicated in the integrated OTN. The simplest of these, such as frame formatting and header processing, can be handled by readily available, highly integrated circuits. Some of the higher-level functions, such as ring healing, require more development. However, it is possible to place all of the SONET functions within the switch part of the integrated node.

Putting these ideas together, it is essentially higher-function silicon chips and smarter system design that enable development of an integrated OTN.
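
To make the "elimination of layers" concrete, the sketch below lists the equipment a single channel traverses in each architecture, per the description above. It is a schematic illustration only; the stage names are invented for this example and do not correspond to any vendor's product.

```python
# Schematic sketch (illustrative stage names, not vendor data): the equipment
# a single optical channel traverses in each architecture.

NON_INTEGRATED_PATH = [
    "DWDM long-reach line interface",
    "DWDM short-reach (SR) interface",    # first SR realization, in the DWDM shelf
    "manual fiber patch panel",
    "switch short-reach (SR) interface",  # second SR realization, on the switch
    "switch fabric with SONET framing/header processing",
]

INTEGRATED_PATH = [
    "DWDM long-reach line interface on the switch",
    "switch fabric with SONET framing/header processing",
]

# Integration removes both SR realizations and the manual patch panel.
removed = [stage for stage in NON_INTEGRATED_PATH if stage not in INTEGRATED_PATH]
print("Stages eliminated by integration:")
for stage in removed:
    print(" -", stage)
```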

Business case for integration
At an architectural level, it is obvious that the integrated OTN means less cost and complexity. But to justify deploying these systems, carriers must quantify the savings. The cost savings have several components, and each can be considered separately. The three major components are initial capital cost, ongoing costs, and network-management costs.

The economic case for capital cost reduction is fairly easy to establish and very significant. Similarly, reasonable models can be created for the ongoing costs, such as space and power, and this analysis again shows significant savings. Models for network-management costs are harder to create, but the reduction in network complexity is considerable.

The Yankee Group, a Boston-based industry research and analysis firm, estimates that 45% of the total network cost to a carrier is associated with network management and staff; therefore, savings in this area are important. Equally important, integration can lead to a considerable reduction in the time needed to establish new services: service velocity can be taken from weeks to minutes.
Figure 1. An integrated optical transport node in today's central office eliminates multiple layers of equipment, meaning less cost and complexity.

One method of determining the capital cost benefit of integrating transmission with switching is to evaluate a specific implementation of an integrated solution. The product is configured in three ways: as a standalone switch, as a standalone DWDM system, and as an integrated OTN. Combining the costs of the two standalone systems, joined by a short-reach optical interface, provides a cost reference for the non-integrated solution. That cost is then compared with the cost of the same function performed by the integrated version of the solution.

For the purpose of economic analysis, a good example is a configuration of 128x128 ports of OC-192 (10 Gbits/sec). This example would have 128 wavelengths at 10 Gbits/sec on the DWDM system and a switch core to support all the traffic. The 10-Gbit/sec speed is used since it is the most common transmission speed in the core of today's optical network. The 128 ports of OC-192 switching represent a typical large switch deployed today.

In pricing the non-integrated implementation, data is available from analyst firm RHK Inc. (San Francisco) for DWDM-system and optical-switch-system pricing (see Table). In the Table, the OC-192 short-reach (SR) and long-reach (LR) interface costs have been disaggregated from the switch and DWDM costs; the disaggregation is based on implementation experience, using the RHK data as a baseline.
Figure 2. In the non-integrated solution, there are two realizations of the short-reach interface: one in the switch and one in the DWDM equipment. These realizations are not required in the integrated solution.

Figure 2 shows the two models for building the OTN. In the non-integrated solution, there are two realizations of the SR interface: one in the switch and one in the DWDM equipment. These realizations are not required in the integrated solution. Also, the DWDM common equipment is eliminated in the integrated solution. The Table reflects the quantities of each type of component required. Note that the units in the Table represent an aggregation of everything required to support a 128x128 solution. For example, one unit of OC-192 LR in the Table represents 128 OC-192 interfaces.

This example shows considerable capital cost savings from integration: approximately 25% in this case. The savings result primarily from the elimination of the SR cards, which account for a substantial share of the cost of the non-integrated configuration. The amount of common equipment is also reduced, and no separate shelf is required for the DWDM equipment, a benefit that also leads to space and power savings.
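
A minimal sketch of this capital-cost comparison follows. The per-unit figures are hypothetical placeholders, not the RHK or Table values; they were chosen only to be roughly consistent with the percentages quoted in this article, and each entry aggregates everything needed for all 128 ports of that type.

```python
# Hypothetical aggregated costs (in $ millions) for a 128x128 OC-192 node.
# Illustrative placeholders only, NOT the RHK/Table figures.
costs = {
    "switch core + interface logic": 6.4,
    "OC-192 LR interfaces (128)":    2.9,
    "OC-192 SR on switch (128)":     1.4,
    "OC-192 SR on DWDM (128)":       1.4,
    "DWDM common equipment":         0.3,
}

non_integrated = sum(costs.values())

# Integration eliminates both sets of SR interfaces and the DWDM common shelf;
# the LR interfaces move onto the switch.
integrated = costs["switch core + interface logic"] + costs["OC-192 LR interfaces (128)"]

saving = (non_integrated - integrated) / non_integrated
print(f"non-integrated: ${non_integrated:.1f}M, integrated: ${integrated:.1f}M")
print(f"capital cost saving from integration: {saving:.0%}")  # roughly 25%
```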

Cost is key
One limitation of this analysis is that it is based on industry-average pricing data; specific implementations will vary from this average. One area of variation is the cost of switching. Recent announcements of new switch chips have considerably changed the switch landscape, enabling much bigger switches at considerably lower cost. Again, based on our implementation experience, we would expect the switch-core cost to halve for the same size of switch relative to the RHK data.

This lower-cost switch core is very important in the overall savings available to the carrier. Repeating the calculations in the Table, but with the assumption that the switch cost is half that of RHK data, yields an integrated solution cost of $6.1 million for the same configuration. That represents a 32% savings for integration using the same switch technology. More important, the savings relative to the non-integrated case with the older switch cores are 50%. The implication is that carriers deploying optical networks in 2002 should be seeking integrated solutions with highly dense switch cores to get the best possible capital costs.
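
Continuing the sketch above with the same hypothetical figures (again, illustrative placeholders rather than the RHK data), halving the switch-core cost gives results in line with the percentages quoted here:

```python
# Same hypothetical figures as the earlier sketch ($ millions, 128x128 OC-192).
switch_core, lr, sr_switch, sr_dwdm, dwdm_common = 6.4, 2.9, 1.4, 1.4, 0.3

non_integrated_old = switch_core + lr + sr_switch + sr_dwdm + dwdm_common      # older core
non_integrated_new = switch_core / 2 + lr + sr_switch + sr_dwdm + dwdm_common  # halved core
integrated_new     = switch_core / 2 + lr                                       # halved core

print(f"integrated, new switch core: ${integrated_new:.1f}M")
# Saving vs. a non-integrated node built with the same new core (article quotes roughly 32%)
print(f"vs. non-integrated, new core: {1 - integrated_new / non_integrated_new:.0%}")
# Saving vs. a non-integrated node built with the older core (article quotes roughly 50%)
print(f"vs. non-integrated, old core: {1 - integrated_new / non_integrated_old:.0%}")
```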

One element not included in the above calculations of capital cost savings is the line system, which includes amplifiers, dispersion compensation, and multiplexers. It is assumed that this equipment is independent of whether the CO OTN is integrated, so it does not need to factor into the calculations. Of course, if the carrier's total spending on the optical layer is considered, the economic case for integration looks relatively smaller, since the savings are amortized over a larger spending base. Conversely, the business case for integration, when all costs are considered, will be strongest when the fiber runs between COs are short (reducing the total number of amplifier sites).

To put the extent of these capital cost reductions in perspective, consider total carrier spending on these types of systems. The Yankee Group predicts that in 2004, total spending on core switching and long-haul DWDM will be $39 billion. While the savings apply only to the CO equipment within that figure, the 30%-plus capital cost savings calculated here still represent multiple billions of dollars in savings to the industry wherever integrated OTNs are adopted.

Business case for ongoing costs
The main elements of ongoing costs are space and power requirements. As a rough approximation, the space and power savings can be assumed to be about equivalent to the capital cost savings: 25% to 50%. A more precise estimate is harder to obtain because space and power requirements vary considerably between different vendors' implementations.

But it is instructive to delve a little deeper into these numbers. Using the 128x128 OC-192 example considered for the capital-cost comparison and currently available technology, we can compare typical space requirements. According to vendors currently shipping switches that support that capacity, the space requirement would typically be about four full racks. Similarly, a DWDM system supporting 128 OC-192 wavelengths would also require about four racks.

Based on implementation experience, the space requirements may be broken down as follows. The switch core and switch interface logic together take two-and-a-half of the four switching racks; the remaining one-and-a-half racks are associated with the OC-192 SR ports on the switch. Similarly, in the DWDM equipment, one-and-a-half racks are required to support the OC-192 SR interfaces, and the remaining two-and-a-half racks support the LR interfaces.

In an integrated solution, the space required for the SR interfaces on both the DWDM equipment and the switch is eliminated. The total space requirement for the integrated solution is five racks, a 38% space savings.

There are also further space savings available since fiber must be run between the DWDM equipment and the switch in a non-integrated solution. Typically, this fiber would go via a fiber patch panel. Such a patch panel might consume another rack. Taking the elimination of this patch panel into account would increase the space savings to 45%.
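
The rack arithmetic above can be summarized in a short sketch using only the typical figures quoted in this article (these are representative values, not any specific vendor's footprint):

```python
# Rack counts from the space comparison above (typical figures, not vendor-specific).
switch_core_logic = 2.5   # switch core + interface logic
switch_sr         = 1.5   # OC-192 SR ports on the switch
dwdm_sr           = 1.5   # OC-192 SR interfaces in the DWDM equipment
dwdm_lr           = 2.5   # long-reach interfaces
patch_panel       = 1.0   # fiber patch panel between switch and DWDM equipment

non_integrated = switch_core_logic + switch_sr + dwdm_sr + dwdm_lr  # 8 racks
integrated     = switch_core_logic + dwdm_lr                        # 5 racks (SR space eliminated)

print(f"space saving: {1 - integrated / non_integrated:.0%}")  # ~38%
# Counting the eliminated patch panel as well (article rounds this to 45%)
print(f"with patch panel: {1 - integrated / (non_integrated + patch_panel):.0%}")
```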

It is more difficult to predict power savings from the integration of DWDM and switching, because power data is generally not made public. As a guide, however, power savings would be expected to track space savings fairly closely; hence, savings on the order of 40% might be expected.

Network-management costs
By far, the most difficult economic benefit to model relates to the operational and network-management costs, since operational practices vary from carrier to carrier. Again, data tracking workflow and showing how changes affect workflow costs is not readily available. But it is worthwhile to identify the elements that contribute to the reduction of management and operational costs.

The most obvious contribution to the reduction of management costs due to integration is having a single management interface for the optical transport layer. Another key element is the elimination of fiber patching between equipment, which eases the establishment of new services, maintenance of services, and troubleshooting.

In the establishment of new services, the practices required differ considerably between the two cases. With the non-integrated solution, establishing a service requires setting up a wavelength on each of the paths between COs; for a circuit across the United States, that could mean establishing many individual circuits, one for each DWDM link. Separately, the circuit is established at the switching layer using the management system for the switch. There is likely to be some manual involvement of the network planning staff at this step to ensure the route selected by the switching equipment uses DWDM links where spare wavelengths are available.

Once the planning step and provisioning of the DWDM and switching equipment are completed, there is an additional manual step: at each CO in the path of the circuit, a fiber patch must be completed from the DWDM link to the switch and from the switch to the next DWDM link. Thus, for a circuit traversing 10 COs, establishment would require a planning step for the switch network, a planning step for each of the nine DWDM links, a manual check between the switch and DWDM networks, 20 different fiber patches, and testing of the circuit.

In contrast, with a fully integrated system, it is possible to run the planning step once, since the system knows all of the DWDM and switch capacity available. Then the circuit can be automatically configured. There is no need to do the manual patch-panel step, since the DWDM interface is included in the switch.
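
An illustrative tally of the two workflows is sketched below, following the 10-CO example above. The step counts follow the article's description (including its count of 20 fiber patches for 10 COs); they are not drawn from carrier operational data.

```python
# Illustrative tally of provisioning steps for one circuit (per the example above).

def non_integrated_steps(num_cos: int) -> dict:
    links = num_cos - 1  # one DWDM link between adjacent COs
    return {
        "switch-layer planning":      1,
        "per-link DWDM provisioning": links,
        "manual plan cross-check":    1,
        "fiber patches":              2 * num_cos,  # article counts 20 patches for 10 COs
        "circuit test":               1,
    }

def integrated_steps(num_cos: int) -> dict:
    # Circuit length does not add manual steps in the fully integrated case.
    return {
        "end-to-end planning + automatic configuration": 1,
        "circuit test":                                  1,
    }

for label, steps in (("non-integrated", non_integrated_steps(10)),
                     ("integrated", integrated_steps(10))):
    print(label, steps, "total steps:", sum(steps.values()))
```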

Given that current best practice for establishing OC-48 (2.5-Gbit/sec) and OC-192 circuits in the long-haul network takes weeks, there is considerable opportunity to lower this time for establishing service. With equipment already deployed in the network, it is possible to set up circuits in a fully integrated solution within minutes.

Maintenance and troubleshooting of services see similar benefits in the integrated case. Switching circuits to alternate routes offers gains comparable to the circuit-setup example. Troubleshooting also becomes easier, since manual steps have been eliminated and all equipment and services are visible under one management system.

So while it is difficult to estimate cost savings from the operational benefits of integration, it is clear the time to deploy services is radically reduced. Other benefits result from having less equipment to install, which, based on the space comparison, is about a 40% reduction in effort.

Putting it all together
The economic business case for integrating DWDM equipment with optical-switching equipment in the CO is quite compelling. Given the level of carrier spending in this layer of the network and the size of the savings (between 25% and 50%), it is hard to ignore integration. As DWDM and switching technologies mature, it will be hard to achieve these savings purely through improved technology; RHK, for example, predicts cost reductions in these areas of between 8% and 20% per year from technology alone. Thus, simply selecting an integrated solution can provide an economic benefit equivalent to three to five years of waiting for independent technology improvements in DWDM and switching.
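
As a rough check on that equivalence, the sketch below assumes a steadily compounding annual decline (an assumption of this sketch, not a method stated in the article) and asks how many years of decline match a one-time saving. Pairing the lower saving with the lower decline rate, and the higher with the higher, lands in roughly the range cited; the extreme combinations vary more widely.

```python
import math

# Years of steady annual cost decline needed to match a one-time saving:
# solve (1 - annual)^years = (1 - one_time) for years. Illustrative only.
def equivalent_years(one_time_saving: float, annual_decline: float) -> float:
    return math.log(1 - one_time_saving) / math.log(1 - annual_decline)

for saving in (0.25, 0.50):
    for annual in (0.08, 0.20):
        print(f"{saving:.0%} one-time saving vs {annual:.0%}/yr decline: "
              f"{equivalent_years(saving, annual):.1f} years")
```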

But the most compelling case for the carrier is to choose an integrated solution in which the switching component uses the new generation of electrical switch cores that deliver much higher scalability and much better density at much lower cost.

Marc Schwager is vice president of marketing and Ian Wright is chief technology officer at Altamar Networks Inc. (Mountain View, CA). They can be reached via the company's Website, www.altamar.com.
