Fiber bandwidth glut: Is that the problem?

March 1, 2001

Carriers that cannot meet the demands of their customers may have to look beyond whether fiber is available.

ROB NEWMAN, Altamar Networks

Market and financial analysts have recently caused a stir in the optical community by challenging the long-held belief that there is not enough fiber in the ground to handle increasing traffic demands. Such a position, at first, seems incredible. Traffic demands are growing explosively and data traffic has overtaken voice traffic in volume. At the same time, we know it takes a long time and a lot of money to trench new fiber across the country. If this is true, how could there be a fiber bandwidth glut?

It is the very nature of the fiber-bandwidth-glut argument, running counter to established logic, that makes it a hot debate. But the debate also has important undertones that affect the business models and valuations of carriers and equipment vendors alike. Maybe the business models built around constructing and owning fiber networks are flawed. Maybe the fiber manufacturers themselves, which today can't come close to meeting demand, will soon crash into a wall of oversupply, just after enormous capital spending on manufacturing facilities to meet that demand.

Because of the implications of the debate, and its controversy, it is certainly worth a closer look. The answers are surprising: Carriers will still have difficulty meeting the bandwidth demands of their customers, but not because of fiber.

For a period approaching two decades, carriers in the United States have deployed fiber almost exclusively as the medium on which to transport all traffic between switching offices. With fiber came the benefits of incredible capacity increases, better performance, and smaller size.

But over the past five years, data traffic demands have exploded, generating a lot of research in improving the carrying capacity of fiber. Three areas of capacity improvement have been developing alongside each other:

  • Increasing the number of fibers per bundle.
  • Increasing the number of wavelengths carried per fiber.
  • Increasing the data rate carried on each wavelength by increasing the modulation rate.
Increasing the number of fibers per bundle has been driven primarily by economics, rather than by technical advances (it costs about the same to lay a bundle of 144 fibers as it does to lay a bundle of 16 fibers). In contrast, the wavelength count per fiber and the data rate carried by each wavelength have both been following a technological improvement curve. Improvements in photonics technology have come much faster than Moore's Law: from single-wavelength OC-48 (2.5-Gbit/sec) transmission systems to 80-wavelength OC-192 (10-Gbit/sec) systems in less than five years. Systems that handle 160 wavelengths and 40 Gbits/sec have already been demonstrated.
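
To put "much faster than Moore's Law" in numbers, here is a back-of-envelope sketch. The endpoints and the five-year span come from the text above; everything else follows from them:

```python
import math

# Back-of-envelope check that per-fiber capacity growth outpaced Moore's Law.
start_gbps = 2.5               # single wavelength at OC-48
end_gbps = 80 * 10.0           # 80 wavelengths at OC-192
years = 5.0

growth = end_gbps / start_gbps                  # 320x
doublings = math.log2(growth)                   # ~8.3 doublings
doubling_time_months = years * 12 / doublings   # ~7.2 months

print(f"{growth:.0f}x in {years:.0f} years: one doubling every "
      f"{doubling_time_months:.1f} months (vs. ~18 months for Moore's Law)")
```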

Figure 1. Two curves show the number of OC-48 streams available in the average central office over time, as well as the likely peak number in a very large central office.

It is the combination of these three areas of improvement that has led to the "bandwidth glut" concern. If we just take the state-of-the-art systems being deployed at the beginning of 2000, it is possible to see the reason for the concern. Assuming 120 fibers between two cities in a single bundle, there are 60 fibers for each direction. On each fiber, it is possible to handle 80 channels of OC-192 signals. That's a switching-office-to-switching-office capacity of 48 Tbits/sec, or 19,200 OC-48 streams.
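
Spelled out, the arithmetic is simple multiplication; this sketch just restates the numbers above:

```python
# The office-to-office capacity arithmetic from the paragraph above.
fibers_in_bundle = 120
fibers_per_direction = fibers_in_bundle // 2    # 60
wavelengths_per_fiber = 80                      # DWDM channels
gbps_per_wavelength = 10.0                      # OC-192

link_gbps = fibers_per_direction * wavelengths_per_fiber * gbps_per_wavelength
oc48_streams = int(link_gbps / 2.5)             # OC-48 = 2.5 Gbits/sec

print(f"{link_gbps / 1000:.0f} Tbits/sec, or {oc48_streams:,} OC-48 streams")
# -> 48 Tbits/sec, or 19,200 OC-48 streams
```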

Figure 1 uses two curves to represent the number of OC-48 streams available for switching in the average central office over time and the likely peak number in a very large central office.

The bandwidth capacity of the installed fiber plant is already very large and growing rapidly. What about the demand?

Most attempts to answer this question revolve around a discussion of exponential growth in the number of Internet hosts, with some discussion of the growth in the number of net users (which is bounded). Sometimes, this discussion is extended to consider the growth in bandwidth needs of new applications (such as MP3 and digital imaging), which is unbounded and unknown.

Usually missed, however, is the effect of cost on bandwidth demand, which shows price elasticity: revenues increase as the price per bit/sec decreases. In other words, when the price drops 50%, traffic volume more than doubles, for a net revenue increase. The increased demand from new users and applications will not be realized unless the cost per bit/sec decreases dramatically.
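
A minimal sketch of that argument, assuming a constant-elasticity demand curve; the elasticity value of 1.6 is illustrative, not from any carrier data. The argument only requires that elasticity exceed 1:

```python
# Constant-elasticity demand sketch. The elasticity value is an assumption.
elasticity = 1.6                                # hypothetical

price_0, price_1 = 1.00, 0.50                   # price per bit/sec halves
volume_ratio = (price_0 / price_1) ** elasticity
revenue_ratio = volume_ratio * (price_1 / price_0)

print(f"volume: {volume_ratio:.2f}x, revenue: {revenue_ratio:.2f}x")
# -> volume: 3.03x, revenue: 1.52x -- halving the price grows revenue
# only because volume more than doubles.
```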

So it is reasonable to ask: What prevents the cost from decreasing? Is there any good reason that traffic volumes should not continue to increase exponentially? Where is the cost bottleneck or the barrier to scaling the network throughput? The fiber-bandwidth-glut argument suggests that it is not the fiber plant, at least in most cases.

In trying to understand why there still appears to be a problem with bandwidth supply, we initiated a series of discussions with leading carriers. We chose to speak with the long-haul carriers deploying their own fiber networks. The discussions centered on how they are building their transport networks today and how those network architectures will change over time.

The feedback from the discussions was remarkably consistent across this group. All of the carriers were, or soon would be, deploying the most advanced optical products available today: OC-192 systems. They were using DWDM to at least 40 wavelengths per fiber, with some already at 80 wavelengths. All had plans to upgrade to 160 wavelengths, and they felt significant pressure to continue expanding the capacity of their fiber plant.

Figure 2. The graph shows the gap between link capacity and switching capacity, making it evident that the shortage is now in switching bandwidth rather than link bandwidth.

But the interesting insight from these discussions came at the next level. How are the carriers making use of all of this bandwidth? The universal answer is that they are deploying SONET systems as they have historically done. But the traditional SONET ring and add/drop multiplexing (ADM) approach is becoming a serious barrier to scaling.

Think of the scenario behind Figure 1, in which we showed 48 Tbits/sec of capacity on a point-to-point link between two offices. Using OC-192 SONET ADM systems, it would require 4,800 boxes (each occupying a full 7-ft rack) to terminate all of these optical signals, and that's a lot of space and power.
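
The box count is straight division, one ADM per OC-192 wavelength, per the text:

```python
import math

# Rack count for terminating the link with OC-192 SONET ADMs, one ADM per
# wavelength, each ADM filling a full 7-ft rack.
link_gbps = 48_000          # 48 Tbits/sec from the earlier arithmetic
adm_gbps = 10.0             # one OC-192 terminated per ADM

adms = math.ceil(link_gbps / adm_gbps)
print(f"{adms:,} ADMs, {adms:,} seven-foot racks, for one link")
# -> 4,800 ADMs -- before adding the rings to every other office
```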

Additionally, that is only to support links to one other office. To build a network, more ADMs are required for rings to other offices and OC-48 circuits must be handed from ring to ring by patch cables.

The service velocity supported by this architecture also has these long-haul carriers very concerned. To establish a high-capacity service across the country requires multiple steps and manual patch-panel work. The carrier has to set up a circuit across each SONET ring independently, then have a technician manually patch together the SONET boxes.

The carriers have all realized these issues and responded by looking to a new architecture based on wavelength switching in the core of the optical network. Wavelength switching allows the carriers to build more flexible architectures, do point-and-click provisioning of wavelengths, and lower both the capital cost and operating cost dramatically. And the equipment vendors have responded with products, just now being deployed, that meet this requirement.

Does the new architecture for core optical transport with wavelength switching remove the barriers to traffic and revenue growth? Not quite. Return to Figure 1, which defines the link capacity between switching offices of a major long-haul carrier that owns its own fiber plant. Now, on this same graph, let's plot the switching capacity of these optical-switching products.

To do the plot, we again need to determine current switching performance and estimate its rate of improvement. The state-of-the-art optical switches, just now being deployed, are 512-port systems capable of an OC-48 per port, for an aggregate switch capacity of just over 1 Tbit/sec. Since these switches are based on electronic switching cores, we should assume Moore's Law improvement in switching capacity, meaning it doubles every 18 months. Figure 2 shows our original graph redrawn to add optical-switch capacity alongside fiber capacity.
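
Projecting both curves forward makes the widening gap of Figure 2 concrete. The 18-month switch doubling is from the text; the roughly 7-month fiber doubling time is the rate estimated earlier and is an assumption of this sketch:

```python
# Projecting Figure 2: link capacity compounds faster than switch capacity.
switch_tbps = 1.28          # 512 ports x 2.5 Gbits/sec
link_tbps = 48.0            # Figure 1 link capacity

for months in (0, 12, 24, 36):
    sw = switch_tbps * 2 ** (months / 18)   # Moore's Law doubling
    ln = link_tbps * 2 ** (months / 7)      # assumed fiber doubling
    print(f"t+{months:2d} mo: switch {sw:6.1f} Tb/s, "
          f"link {ln:8.1f} Tb/s, gap {ln / sw:5.0f}x")
```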

Now we are getting closer to understanding the bandwidth-glut problem. Fiber and fiber capacity are no longer the issue. With all of the advances in fiber bandwidth, there has been an inversion: Switching bandwidth is now the scarce resource, not link bandwidth.

If switching capacity is the scarce resource and the capacity of current electronic core switches is insufficient, we could look to the new technology of photonic switching. It has a couple of immediate advantages, one of which is bit-rate transparency. Photonic switches introduce loss and perhaps some other optical-domain signal degradations, but they don't care how many wavelengths are multiplexed into each port or how those wavelengths are modulated. As the modulation speed on the fiber improves from OC-48 to OC-192 and on to OC-768 (40 Gbits/sec), the same port can handle the increased speed. The other advantage is that the technology is new and may improve at a rate faster than electronic-based switching.

Figure 3. A strictly nonblocking five-stage Clos fabric, as constructed here, can support 49,152 ports.

While photonic switching has more headroom than current electronic switches, it is still not enough to match fiber capacity. And today's photonic switches have three significant disadvantages: wavelength blocking, difficult performance monitoring, and coarse switching granularity.

Wavelength blocking is problematic because photonic switches merely redirect the light and cannot alter the stream. Thus, when building networks with these switches, collisions occur when circuits from two sources, using the same wavelength, converge on a link. Both circuits cannot use the same wavelength, so one must be translated. In larger networks, this blocking problem quickly becomes unmanageable without wavelength translation, and today, wavelength translation requires a return to the electrical domain and modulation of another laser; in effect, a transponder.
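
A minimal sketch of the collision check; the circuits and routes here are hypothetical:

```python
from collections import defaultdict

# Wavelength-blocking sketch: in a transparent photonic network, two circuits
# that share any link cannot reuse the same wavelength.
circuits = {
    "A->C": {"route": [("A", "B"), ("B", "C")], "wavelength_nm": 1550.12},
    "D->C": {"route": [("D", "B"), ("B", "C")], "wavelength_nm": 1550.12},
}

in_use = defaultdict(set)   # wavelengths already lit on each link
for name, ckt in circuits.items():
    for link in ckt["route"]:
        if ckt["wavelength_nm"] in in_use[link]:
            print(f"BLOCKED: {name} collides on {link} at "
                  f"{ckt['wavelength_nm']} nm; wavelength translation "
                  f"(an OEO transponder) is needed")
        in_use[link].add(ckt["wavelength_nm"])
```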

Since the signal is never returned to its electrical form within the switch, performance monitoring of the stream is much more difficult. Also, switching granularity can be no finer than the modulation rate of a wavelength. If it is necessary to switch at OC-48 granularity (a common requirement), then either the wavelength modulation rate must be limited to 2.5 Gbits/sec or the higher-rate signal (e.g., an OC-192) must be returned to the electrical domain for multiplexing, demultiplexing, and switching.

It seems likely that photonic technology will improve to address these issues over the next several years, but the solution is not available in the near-term deployment time frame for carriers.

Electronic switching has all the operational characteristics a carrier would desire: performance monitoring, smaller switching granularity, and wavelength translation. All that is needed from the carrier's requirements perspective is radically larger switching capacity achieved with high density, low power, and low cost. Is it possible to construct a different switching architecture that allows these requirements to be met?

From our fiber-capacity curve, we see a need today for 100+ Tbits/sec of switching capacity, growing to nearly 1 Pbit/sec over the next three years, assuming that multiple fiber bundles converge on an office. But we also don't want to pay for all of that switching capacity today. Through the use of some new technology components and a novel switch-implementation architecture, it is possible today to build a switch based on an electronic core that meets all of the carrier's operational requirements, scales linearly from eight ports of OC-48 switching to 1 Pbit/sec, and has the density and low power needed for deployment in the central-office space left over between all of the SONET boxes.
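
At OC-48 granularity, those targets translate directly into port counts; a quick check using the figures above:

```python
# Port counts implied by the capacity targets at OC-48 granularity.
oc48_gbps = 2.5
for label, tbps in (("today", 100), ("~3 years out", 1_000)):
    ports = int(tbps * 1000 / oc48_gbps)
    print(f"{label}: {tbps:,} Tbits/sec -> {ports:,} OC-48 ports")
# -> today: 40,000 ports; ~3 years out (1 Pbit/sec): 400,000 ports
```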

To achieve wavelength switching in the electrical domain, the first technology component required is a high-capacity switching chip. For the switch fabric, we can use off-the-shelf crossbar chips providing 64x64 ports with 2.5-Gbit/sec serial interfaces. A strictly nonblocking five-stage Clos optical-electrical-optical (OEO) switch fabric built from these chips supports 49,152 ports (see Figure 3).
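
The 49,152-port figure is consistent with the classic Clos condition that each stage have middle-switch count m >= 2n - 1. The particular stage sizing below is an illustrative assumption, not necessarily the product's actual partitioning:

```python
# One way 49,152 ports can fall out of 64x64 crossbar chips (assumed sizing).
def strictly_nonblocking(n, m):
    return m >= 2 * n - 1    # Clos condition for a three-stage network

CHIP = 64                    # 64x64 crossbar with 2.5-Gbit/sec serial I/O

# Inner three-stage Clos: 64 edge chips used 24-in x 47-out,
# 47 center chips used as full 64x64 crossbars.
n2, m2, r2 = 24, 47, 64
assert strictly_nonblocking(n2, m2) and m2 <= CHIP and r2 <= CHIP
middle_plane_ports = n2 * r2             # 1,536 ports per middle plane

# Outer stage: edge chips used 32-in x 63-out, wrapped around 63 such planes.
n1, m1 = 32, 63
assert strictly_nonblocking(n1, m1) and m1 <= CHIP

total_ports = n1 * middle_plane_ports    # 32 * 1,536
print(f"{total_ports:,} strictly nonblocking OC-48 ports "
      f"({total_ports * 2.5 / 1000:.0f} Tbits/sec)")
# -> 49,152 ports, ~123 Tbits/sec
```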

Since the transmission system already requires transponders before signals enter the switch, we can take advantage of them. We can extract a 2.5-Gbit/sec serial electrical signal from the transponder receiver and pass it through the electrical switch fabric to the transponder transmitter. The transponders thus become the input/output of the switching system, providing regeneration and performance monitoring as well.

The fabric is packaged in only 53 racks. This density is achievable by switching at an OC-48 granularity (rather than switching packets or STS-1 52-Mbit/sec payloads) and by using 64-port crossbar chips, 12-way parallel "optical backplane" interconnects between shelves, and 2.5-Gbit/sec serial electrical interconnects between chips and across high-density backplane connectors.

According to the carriers, the next big frontier for the rapid advancement of technology is switching wavelengths at the core of the network. SONET clearly can't address the need, and carriers know it. First-generation switches, based on both electronic and photonic cores, are now being rolled out to relieve the pressure on SONET. But a quick look at the capacities of fiber transmission and of current switching technologies demonstrates that a discontinuity in scale and cost is required in switching solutions.

Over the next few years, given the maturity and flexibility of electronic-switching cores, this approach appears to be the best chance for removing the switching bottleneck. In fact, based on technologies available today, products will soon appear that allow switches to scale to 1 Pbit/sec. That is important for carriers owning their own fiber networks, if they are to unlock the value of all of the bandwidth buried in the ground.

Rob Newman is executive vice president for market development at Altamar Networks (formerly Ditech Communications Corp.), headquartered in Mountain View, CA.
