Minimizing latency in long-haul networks

Oct. 19, 2011
David Mazzarese of OFS writes that the most significant contributor to latency is the fiber – so the right fiber choice is essential for low-latency links. Dispersion compensation approaches also can have a large impact, he advises…

The most significant contributor to latency is the fiber - so the right fiber choice is essential for low-latency links.

By DAVID MAZZARESE

The widespread use of DWDM technology and 40-Gbps (and higher) transmission rates can push the information-carrying capacity of a single optical fiber to well over a terabit per second - great news for emerging data-hungry applications such as on-demand high-definition video. But a few applications depend not only on how much data can be transmitted but also (and perhaps more important) on how quickly that data arrives. These applications have changed the definition of "optimum" for a long-haul network.

Latency describes how long it takes for data to get from point A to point B. In an optical system, it's essentially the length of the optical fiber divided by the speed of light in that fiber, plus any data processing "overhead." For example, transmitting a signal from New York City to Chicago and back takes about 15.9 ms using a traditional low-latency link. Why would it be necessary to reduce this seemingly insignificant period of time even further? This paper looks at the demands for minimizing latency and the optical fiber considerations that can help achieve this objective.

An increasing number of enterprises require a low-latency connection for their applications. Carriers face important choices when engineering their networks to meet this need.

Market drivers for minimizing latency

Humans cannot perceive a microsecond, but a computer can process thousands of commands in that time. Thus, even a millisecond becomes significant for operations conducted entirely by computers.

One example can be found in investment firms that manage mutual funds with enormous numbers of stock shares. The managers of these funds have developed preprogrammed rules to sell or buy particular stocks at particular price points. Once programmed, the computer receives information from Wall Street and issues commands to buy or sell.

This is where latency becomes very important. The rule on Wall Street is "first come, first served." If your sell order arrives earlier than that of other firms (even a fraction of a millisecond earlier), you get the sale. Trading a millisecond faster than the competition can be worth significant amounts of money for large financial firms. As a result, these firms compare data services from several providers, and the one with the lowest latency will consistently win their business. Further, these firms will validate the latency on a regular basis to ensure that the service provider delivers what it promised.

A second key driver for lower latency is validation of transferred data (or, as it is often called, the "handshake"). For example, most businesses are required to store their data at multiple locations. The easiest way to do this is to maintain a mirrored disk drive at a remote location. When data is stored remotely, the optical transport between the two locations must be "transparent." In other words, the network user should see no difference between using the co-located disk drive and the one at the remote location.

Too much delay between sending data to a remote site and receiving the acknowledgement that the packet arrived successfully can slow communication to the remote site considerably, since each new transfer must wait for the previous handshake to complete. Private networks encounter similar difficulties when they send data back and forth between two computers to achieve a greater level of security. In both cases, minimizing latency helps ensure efficient data transfer.
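To see why, consider a simple stop-and-wait transfer in which each block must be acknowledged before the next is sent. The Python sketch below is illustrative only; the block size and line rate are assumptions, not figures from any particular storage protocol.

    # Effective throughput of a stop-and-wait transfer: every block
    # pays a full round-trip time (RTT) before the next can be sent.
    # Block size and line rate are illustrative assumptions.
    BLOCK_BITS = 8 * 64_000        # one 64-kB block
    LINE_RATE_BPS = 10e9           # 10-Gbps link

    def throughput_gbps(rtt_ms):
        serialization_s = BLOCK_BITS / LINE_RATE_BPS
        per_block_s = serialization_s + rtt_ms / 1000.0
        return BLOCK_BITS / per_block_s / 1e9

    for rtt in (1.0, 5.0, 15.9):   # the last is the NY-Chicago round trip
        print(f"RTT {rtt:4.1f} ms -> {throughput_gbps(rtt):.3f} Gbps effective")

Even on a 10-Gbps link, a 15.9-ms round trip limits such a transfer to roughly 0.03 Gbps. Real protocols mitigate this with larger transfer windows, but the handshake penalty never disappears entirely.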

Network elements affect transport time

It would seem self-evident that latency should depend primarily on the speed of light and the distance we transmit data. And it does - but how we transmit that data and what our network looks like can affect signal propagation time as well.

For example, light travels down optical fiber at about 127 miles (204 km) per millisecond. According to MapQuest, the distance from New York City to Chicago is 812 miles (1306 km). So getting a signal from New York to Chicago and back should take as little as 12.8 ms - not the 15.9 ms referenced above. So where do the other 3.1 ms come from?
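A quick calculation confirms the fiber-only portion of that figure. In this sketch, the group index of 1.468 is a typical value for silica fiber, assumed here rather than taken from a datasheet:

    # Decompose the measured New York-Chicago round trip into fiber
    # propagation and everything else. A group index of 1.468 is a
    # typical value for silica fiber (an assumption).
    C_KM_PER_MS = 299_792.458 / 1000.0        # speed of light in vacuum, km/ms
    FIBER_KM_PER_MS = C_KM_PER_MS / 1.468     # ~204 km/ms (~127 miles/ms)

    span_km = 1306.0                # New York-Chicago, per MapQuest
    propagation_ms = 2 * span_km / FIBER_KM_PER_MS
    measured_ms = 15.9              # the round trip cited above
    print(f"Fiber alone: {propagation_ms:.1f} ms")                   # ~12.8 ms
    print(f"Unaccounted for: {measured_ms - propagation_ms:.1f} ms") # ~3.1 ms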

There are certain network elements necessary for this long-distance communication that can affect the transport time. These elements include data packing, switching, signal regeneration, amplification, chromatic dispersion correction, and polarization mode dispersion (PMD) correction. The type of optical fiber in the network and how it works in conjunction with these other elements can dramatically affect the latency observed in the optical network as well.

The fastest way to transmit a signal is to minimize the overall path length and keep the signal in the optical domain. Using optical amplifiers and reconfigurable optical add/drop multiplexers (ROADMs) helps accomplish this second goal - but dispersion control poses a challenge. Avoiding any regeneration steps is critical. Optical-electronic-optical (OEO) conversion takes about 100 µs, depending on how much processing is required in the electrical domain. So let's look at how optical fibers can help minimize regeneration requirements.
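The effect of regeneration on a latency budget is straightforward to model. In the sketch below, the hop counts are illustrative assumptions; the roughly 100-µs OEO figure is the one cited above:

    # One-way latency for a 1,306-km span: propagation plus OEO
    # regeneration. Hop counts are illustrative assumptions; ~100 us
    # per OEO conversion is the figure cited above.
    FIBER_KM_PER_MS = 204.0
    OEO_MS = 0.1                   # ~100 microseconds per regeneration

    def one_way_latency_ms(length_km, n_regens):
        return length_km / FIBER_KM_PER_MS + n_regens * OEO_MS

    for n in (0, 3, 6):
        print(f"{n} regenerations: {one_way_latency_ms(1306, n):.2f} ms one way")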

Use of optical fiber with ultra-low PMD offers one way to avoid signal regeneration. PMD is a statistical, time-varying parameter that is difficult to correct. If too much PMD accumulates across a fiber link, the data stream becomes unintelligible and the signal must be regenerated; this, of course, adds latency. Though the ITU-T specifies a maximum PMD link design value (LDV) of 0.20 ps/√km, there is value in deploying fibers with an ultra-low LDV of 0.04 ps/√km to eliminate the need for OEO regeneration.
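Because PMD accumulates with the square root of link length, the LDV sets an effective unregenerated reach. The sketch below assumes a common rule of thumb - that a link can tolerate a mean differential group delay of roughly 10 percent of the bit period - which is an assumption, not a figure from this article:

    import math

    # PMD grows with the square root of length: link PMD = LDV * sqrt(L).
    # Tolerance of ~10% of the bit period is a common rule of thumb
    # (an assumption here), shown for 40-Gbps NRZ (25-ps bit period).
    tolerance_ps = 0.10 * 25.0

    for ldv in (0.20, 0.04):       # ITU-T limit vs. ultra-low-PMD fiber
        link_pmd = ldv * math.sqrt(1000.0)       # accumulated over 1,000 km
        reach_km = (tolerance_ps / ldv) ** 2     # PMD-limited reach
        print(f"LDV {ldv:.2f} ps/sqrt(km): {link_pmd:.2f} ps over 1,000 km; "
              f"reach ~{reach_km:,.0f} km")

Under these assumptions, the ultra-low-PMD fiber extends the PMD-limited reach from roughly 156 km to nearly 4,000 km - the difference between needing regeneration and avoiding it on most long-haul routes.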

Meanwhile, chromatic dispersion occurs because different wavelengths of light travel at different speeds in optical fibers. There are several ways to correct this. The simplest is to transmit the data at a slower rate so chromatic dispersion does not degrade the system. However, lower data rates are seldom desirable. For applications that demand higher data rates, the chromatic dispersion therefore must be "undone" to generate an understandable data stream. Unfortunately, this process can expend precious microseconds.

For example, an optical-fiber-based dispersion compensation module is the most common method of correcting dispersion. These modules typically contain 1 km of dispersion compensating fiber for every 10 km of transmission fiber. As a result, they add about 10 percent to the transmission time.

It's important to note, however, that not all fibers can use the same module to correct chromatic dispersion. Conventional singlemode fibers require more compensation than fibers with lower chromatic dispersion. For example, non-zero dispersion fibers (NZDFs) were developed to simplify chromatic dispersion compensation while making a wide band of channels available. Because these fibers have lower dispersion than conventional singlemode fiber, they can use simpler modules; for low-slope NZDF, these add only about 3 percent to the transmission time, enabling lower latency than conventional singlemode fiber.

Some NZDFs with a higher dispersion slope, however, require more complex dispersion compensation modules. Though the amount of dispersion is low at 1550 nm, these modules require longer lengths of fiber to correct for the dispersion slope. The latency of networks using these fibers may therefore be greater than that of networks using standard singlemode fiber with a dispersion compensating module.

Innovation reduces dispersion compensation time

The simplest way to minimize latency is to deploy fiber with no chromatic dispersion, which eliminates the need for compensation altogether. Dispersion-shifted fiber offers one way to accomplish this; these fibers have zero chromatic dispersion at 1550 nm. However, they are limited to single-wavelength operation due to nonlinear four-wave mixing.

A more practical solution balances the dispersion along the route by alternating positive- and negative-dispersion fiber segments, similar to the practice in submarine cables. This design integrates the dispersion-compensating fibers into the transmission cable itself, which removes the latency penalty associated with correcting the dispersion.
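Pulling together the approximate penalties cited above, the difference between these options is easy to quantify. The percentages below are this article's rough figures, and the 1306-km New York-Chicago span is used for scale:

    # Approximate one-way fiber latency for a 1,306-km span under the
    # dispersion-management options discussed above. Penalty values are
    # the rough percentages cited in the text.
    FIBER_KM_PER_MS = 204.0

    options = {
        "Conventional singlemode + DCM": 0.10,   # ~10% extra fiber
        "Low-slope NZDF + simpler DCM":  0.03,   # ~3% extra fiber
        "Balanced-dispersion cable":     0.00,   # compensation in the span
    }

    for name, penalty in options.items():
        latency_ms = 1306.0 * (1.0 + penalty) / FIBER_KM_PER_MS
        print(f"{name:31s} {latency_ms:.2f} ms one way")

Over this span, the difference between a conventional compensation design and a balanced-dispersion cable is about 0.6 ms one way - more than 1 ms on a round trip.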

New low-latency requirements for connections between data centers continue to emerge.

Another new technology now reaching deployment uses fiber Bragg gratings to compensate for chromatic dispersion. These devices can correct the dispersion of several hundred kilometers of fiber without any significant latency penalty.

It should be noted that some newer transmission formats, such as dual-polarization quadrature phase-shift keying (DP-QPSK) with coherent detection, require no dispersion compensating fiber. However, they can be a poor choice from a latency perspective because of the added signal processing time they require.

The last important consideration in minimizing latency is excess length. The shortest possible path always helps minimize latency, but slack loops and other excess length in the optical cable typically add approximately 2 percent to the overall distance. Thus a 100-mile link will contain approximately 102 miles of optical fiber.

Service providers and their customers must account for this extra distance when determining the latency of any optical link. Using the techniques described in this paper, there are now networks with round-trip latency between New York and Chicago of less than 13.5 ms - more than 2 ms faster than what was available just a few years ago.
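As a plausibility check on that figure, the sketch below rebuilds the round trip from the pieces discussed in this paper. The physical route length and residual processing overhead are assumptions for illustration; actual routes vary:

    # Rough round-trip estimate for a modern low-latency New York-Chicago
    # route. Route length and processing overhead are assumptions.
    FIBER_MILES_PER_MS = 127.0

    route_miles   = 820.0    # assumed fiber route, a bit over the 812-mile road distance
    slack_factor  = 1.02     # ~2% excess cable length, as noted above
    processing_ms = 0.2      # assumed residual amplifier/terminal overhead

    # Balanced dispersion: no compensation penalty on the fiber itself.
    one_way_ms = route_miles * slack_factor / FIBER_MILES_PER_MS
    print(f"Estimated round trip: {2 * one_way_ms + processing_ms:.1f} ms")

With these assumptions, the estimate lands near 13.4 ms, consistent with the sub-13.5-ms services described above.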

Making the right choices

Most fiber networks are designed to move as much information as possible. For some systems, however, transport time is critical and latency must be considered. In these situations, selecting the right transmission fiber can shave hundreds of microseconds off the transport time.

Therefore, careful selection of the transport fibers can greatly increase the value of the optical network. Balanced dispersion provides the lowest latency, but this comes at the cost of greater engineering effort during network deployment.

DAVID MAZZARESE is manager, fiber systems engineering, at OFS.