Conditioning squeezes the best out of signals

Oct. 1, 2002

Matthew Peach interviews Jeff Livas, VP Systems Technology at Ciena Corp

The first step in signal quality assessment occurs when a signal is received from its source prior to transmission; its power is assessed. Then framing is performed on the signal to monitor its SDH/SONET parameters.

Optical signal power is monitored with a low-bandwidth PIN photodetector on a passive power tap in series with the main optical input. The tap typically diverts about 2% of the incoming power. Software-provisionable thresholds generate alarms when the power moves outside the pre-set range. In some cases the power monitor also includes simple circuitry to check for RF modulation on the light.
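The threshold logic described above can be sketched as follows. This is a hypothetical illustration, not Ciena's actual monitor firmware; the 2% tap ratio comes from the text, while the dBm alarm limits are assumed values.

```python
import math

TAP_RATIO = 0.02  # passive tap diverts ~2% of the incoming power (from the text)

def check_power(tap_reading_mw, low_dbm=-28.0, high_dbm=3.0):
    """Scale the tap photodetector reading back to line power and compare
    it against software-provisionable alarm thresholds (values assumed)."""
    line_power_mw = tap_reading_mw / TAP_RATIO
    line_power_dbm = 10 * math.log10(line_power_mw)
    if line_power_dbm < low_dbm:
        return "LOW-POWER ALARM"
    if line_power_dbm > high_dbm:
        return "HIGH-POWER ALARM"
    return "OK"
```

For example, a tap reading of 0.02mW implies 1mW (0dBm) on the line, which sits inside the assumed alarm window.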

Framing is done as part of the normal process of OE conversion. After the optical signal is converted to an electrical signal and amplified, a clock signal is recovered from the data and used to drive a decision circuit that turns the analogue electrical signal from the photodetector into a digital signal synchronous with the clock.

A digital circuit, the framer, takes this digital signal and looks for patterns that are specific to the modulation format. These patterns are the frame. Both SONET and SDH signals have a frame structure that has an overhead section and a payload section. The overhead section contains extra bytes that contain information about the payload and can be used to monitor quality.

It is possible to simply detect the incoming optical signal, re-shape it, re-amplify it, and then convert to a long-reach signal without framing the signal. The process without framing/re-timing is referred to as a 2-R regeneration (Re-amplification and Re-shaping). If framing is included, it is a 3-R regeneration: re-amplification, re-shaping, and re-timing (Figures 1, 2, 3).

Re-timing reduces the build up of timing jitter, which is the variation of the signal bit-time slots with respect to the clock. Jitter can be a major limitation in long-distance networks, so keeping that under control is important.

The next stage in signal transmission is to convert the digital signal into a long-reach signal. Conversion is achieved by taking the digital data recovered from the client-side optical input and using it to modulate an optical signal that can be propagated over long distances. A typical transmitter of this type uses the digital signal to drive an external modulator placed at the output of a frequency-stabilised DFB laser.

One advantage of the OEO conversion is that it provides a convenient boundary or reference point from which to measure system performance.

Another advantage of using the OEO conversion is that it saves money by allowing different optics to be used for long-distance transmission versus metro or access reach. If the edge equipment is required to generate signals capable of propagating trans-continental distances, it adds cost to a system which, 60-80% of the time, only needs to transmit less than 100km.

For long-reach transmission, one of the main sources of degradation for 2.5Gbit/s signals is amplified spontaneous emission (ASE). Other common problems are four-wave mixing, self-phase modulation, and cross-phase modulation.

Amplified Spontaneous Emission (ASE) is a fundamental consequence of optical amplification and the main linear signal impairment other than dispersion. Each optical amplifier may be thought of as an ideal gain block with noise added at the input. The amount of noise that is added is proportional to the gain of the amplifier, and it is broadband, i.e. the noise occurs at all frequencies simultaneously. As a signal passes through each amplifier, it is amplified and noise is added, degrading the optical signal to noise ratio (OSNR).

At some point, the OSNR drops below the point where a receiver can make good decisions. For a typical optical receiver operating at 10Gbit/s, this ratio is approximately 100:1, or 20dB. Signal processing can improve this by a factor of 4 to 10. Before the OSNR gets to that point, the signal must be regenerated. That means it must go through an OEO conversion via either the 2R or the 3R process outlined above.
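A common rule of thumb makes the OSNR accumulation concrete. The sketch below uses the standard 0.1nm-reference-bandwidth approximation for a chain of identical amplified spans; the launch power, span loss, and noise-figure values are illustrative assumptions, not figures from the interview.

```python
import math

def osnr_db(launch_dbm, span_loss_db, nf_db, n_spans):
    """Rule-of-thumb OSNR (0.1 nm noise bandwidth) for N identical spans.
    The 58 dB constant comes from the photon energy h*nu times the
    reference bandwidth at 1550 nm. Each amplifier adds noise, so OSNR
    falls by 3 dB for every doubling of the span count."""
    return 58.0 + launch_dbm - span_loss_db - nf_db - 10 * math.log10(n_spans)

# Assumed values: 0 dBm/channel launch, 25 dB span loss, 5 dB noise figure.
print(osnr_db(0, 25, 5, 4))  # roughly 22 dB, above the ~20 dB threshold
print(osnr_db(0, 25, 5, 8))  # roughly 19 dB, approaching regeneration
```

The point of the model is the 10log(N) term: the OSNR budget is consumed steadily, span by span, until regeneration is required.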

Dispersion is a propagation effect that has both linear and non-linear consequences. Dispersion simply means that different wavelengths of light travel at slightly different speeds. The difference in speeds means that a signal at the far end of a transmission link has some of its parts slightly out of order, resulting in signal distortion. The simplest distortion is a spreading of the pulse.

It is possible to compensate for the dispersion by adding an optical element, typically special fibre, to the system to restore the original timing. If the system is linear, then the distortion is undone. But if the system is non-linear it is not usually possible to restore the original signal quality.

Four-Wave Mixing (FWM), Self-Phase Modulation (SPM), and Cross-Phase Modulation (XPM) are all non-linear effects that depend on the dispersion. They are non-linear effects because the size of the effect depends on the square of the optical power. This means that the signal changes the index of refraction of the fibre as it propagates. A change in the index of refraction changes the speed at which light travels through the fibre, and this results in a modulation of the optical phase.

Four-Wave Mixing is a process by which three signals interact to produce a fourth interfering signal. Two signals at slightly different frequencies propagate together through a fibre, each modulating the index of refraction as they go. The modulation of the index of refraction looks like a Bragg grating. If a third signal travels through the same fibre, then it will scatter light off the grating into a set of low-power sidebands at the same frequency as the third signal and at frequencies given by the difference between the signal frequencies taken three at a time. The low-power sidebands are phase-coherent with the main signals, so even a small sideband creates a large amount of interference. Since the interfering sidebands are at the same frequency as the signal, filtering does not help.
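The phrase "difference between the signal frequencies taken three at a time" corresponds to mixing products at f_i + f_j − f_k. The sketch below enumerates them for three channels on an assumed 100GHz grid; the channel frequencies are illustrative.

```python
from itertools import product

def fwm_products(freqs_thz):
    """All f_i + f_j - f_k mixing products, excluding degenerate
    combinations where the subtracted signal is one of the first two."""
    out = set()
    for fi, fj, fk in product(freqs_thz, repeat=3):
        if fk != fi and fk != fj:
            out.add(round(fi + fj - fk, 6))
    return sorted(out)

# Three equally spaced channels (100 GHz = 0.1 THz grid, assumed values):
print(fwm_products([193.1, 193.2, 193.3]))
```

For equally spaced channels the products land exactly on the channel grid, and several fall directly on top of the original channel frequencies, which is precisely why filtering does not help.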

Cross-Phase Modulation is a non-linear effect in which the phase of a signal is modulated by the changes in the index of refraction caused by all the other signals in the system. As in FWM, the changes in the index of refraction look like weak Bragg gratings. The phase modulation is converted to amplitude modulation by the dispersion in the fibre, and the amplitude modulation looks like noise on the signal.

Self-Phase Modulation is the same as XPM except that the signal causes its own phase modulation. In DWDM systems SPM is typically much smaller than XPM because XPM depends on the number of neighbouring channels. Fig. 4 shows how these effects depend on dispersion.

Since linear dispersion may be compensated but in general non-linear effects may not, a good signal conditioning system design must focus on managing the non-linear effects.

The main tool for managing non-linearities is the signal power. The idea is to keep the signal power high enough that a receiver can extract the signal, but low enough that non-linearities are minimised. This requires a careful trade-off, and is essential to good system design.

Polarisation mode dispersion (PMD) can also be a problem, even at lower rates such as 2.5Gbit/s, in older fibres and over distances longer than about 100km. PMD affects both 10Gbit/s and 2.5Gbit/s transmission, but since the penalty is smaller at 2.5Gbit/s it is usually less of an issue there. PMD is generally higher in older fibres. Most fibre installed after the early 1990s has extremely low PMD, which is generally not a problem even at 10Gbit/s. In some older fibre, however, the PMD is high enough to cause problems over long distances even at 2.5Gbit/s.

One of the main contributors to the ASE problem is amplifier "ripple" across the spectrum of wavelengths. Ideally a system should treat each wavelength equally, but this presumes that amplification affects each wavelength equally. In practice there are two kinds of deviation. Tilt is an overall slope across the spectrum (see below). Ripple is the fine-scale variation in power on top of the tilt, typically measured as the peak-to-peak power variation of the channels from the average value. Ripple is not as easy to control as tilt because correcting it requires essentially one variable optical attenuator (VOA) per channel. Once the tilt is corrected, ripple is the main reason that the spectrum is not flat.

Tilt is defined as the average power difference between the highest- and lowest-power channels in a DWDM system. If a spectrum is plotted as power versus wavelength, the tilt corresponds to the slope of a least-squares fit to the spectrum. A tilt value of 0dB corresponds to a completely flat spectrum. Most modern gain-flattened amplifiers control tilt by incorporating a variable optical attenuator (VOA) into the design of the amplifier to match the net gain of the amplifier and the VOA to the loss of the fibre (see Fig. 5).
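The two definitions above translate directly into a calculation: tilt is the slope of a least-squares fit to the spectrum, and ripple is the peak-to-peak deviation of the channels from that fitted line. A minimal sketch, with no claim to match any vendor's measurement procedure:

```python
import statistics

def tilt_and_ripple(wavelengths_nm, powers_dbm):
    """Tilt: least-squares slope of the spectrum, expressed as the power
    difference across the band. Ripple: peak-to-peak deviation of the
    channels from the fitted line."""
    mean_x = statistics.fmean(wavelengths_nm)
    mean_y = statistics.fmean(powers_dbm)
    slope = (sum((x - mean_x) * (y - mean_y)
                 for x, y in zip(wavelengths_nm, powers_dbm))
             / sum((x - mean_x) ** 2 for x in wavelengths_nm))
    residuals = [y - (mean_y + slope * (x - mean_x))
                 for x, y in zip(wavelengths_nm, powers_dbm)]
    tilt_db = slope * (max(wavelengths_nm) - min(wavelengths_nm))
    ripple_db = max(residuals) - min(residuals)
    return tilt_db, ripple_db
```

A perfectly linear spectrum gives pure tilt and zero ripple; any channel-to-channel scatter about the line shows up only in the ripple figure.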

In practice, amplifiers are not perfectly flat, so they do not treat all of the channels in the same way. Amplifiers are not perfectly flat for several reasons: fabrication tolerances; pump power variations; and temperature variations — the physical parameters on which the design is based change with temperature. Precise temperature control can reduce variation, but not completely eliminate it.

The best solution for signal optimisation is to have a system that can adapt itself to the changing conditions of the system. One way to do that is to compensate for the different power levels by intentionally launching the channels with different powers at the input to the amplifier so that the output is flat.

The simplest method to intentionally launch channels with different powers is to control the output power of each laser. This is not the best method, because some of the operating characteristics of the lasers depend on the output power. Unless the power differences are small, this can result in side-effects.

A better method would be to have a variable optical attenuator (VOA) per channel, or perhaps per band of channels to reduce cost. Because ripple accumulates as amplifiers are added, it is necessary to control both the initial launch power and to correct the spectrum as it evolves down the amplifier chain.

In a real system this is best done under closed-loop control. The system measures the spectrum and then uses that information to control the channel powers to keep the spectrum flat. A closed-loop system will keep the spectrum flat as the channel count changes, as amplifier characteristics change with age, and as other parts of the system change — such as repairs to the transmission fibre.
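One iteration of such a closed loop can be sketched very simply: attenuate the channels sitting above the band average a little more, relax the attenuation on those below it, and damp the correction so the loop stays stable. This is an illustrative feedback step, not a description of any shipping control algorithm.

```python
def flatten_step(channel_powers_db, voa_atten_db, gain=0.5):
    """One closed-loop iteration: move each per-channel VOA toward the
    setting that flattens the measured spectrum. VOA attenuation cannot
    go negative. 'gain' damps the correction for loop stability."""
    target = sum(channel_powers_db) / len(channel_powers_db)
    return [max(0.0, a + gain * (p - target))
            for p, a in zip(channel_powers_db, voa_atten_db)]

# Two channels, one 1 dB hot and one 1 dB cold (assumed values):
print(flatten_step([1.0, -1.0], [2.0, 2.0]))  # hot channel gets more attenuation
```

Repeated measure-and-correct iterations keep the spectrum flat as channel counts, amplifier characteristics, and fibre plant change.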

To overcome ripple accumulation at distances above 1000km it is necessary to use a dynamic gain equaliser (DGE). This provides one region of attenuation per small slice of the total channel spectrum. An error signal adjusts the relative attenuation of the channels (Fig. 6).

Incoming light from a fibre is wavelength-demultiplexed and passed through a lens to form a line focus. The strength of the reflection is modulated by some form of spatial light modulator — liquid crystals, MEMS mirrors, etc. The same principle may be used to construct an optical performance monitor (OPM) by replacing the array of liquid crystals with detectors.

As 10Gbit/s and 10 Gigabit Ethernet become more widespread, DGE-type compensation will become more critical in many deployments. Ciena's amplifiers are already flat enough to support transmission distances up to 1600km. Distances beyond that are possible as well, but for operational reasons it begins to make sense to use active control.

One must also consider chromatic dispersion. This is data-rate dependent. At 2.5Gbit/s, it is generally not necessary to compensate for chromatic dispersion at distances shorter than ~1000km. At 10Gbit/s, it is necessary to compensate for dispersion for distances longer than 60km (on standard fibre). As 60km is comparable to most amplifier spacing, this means essentially that every span must be compensated to enable 10Gbit/s operation.
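The data-rate dependence follows the standard inverse-square scaling of the chromatic dispersion limit: quadruple the bit rate and the uncompensated reach drops by a factor of 16. Anchoring the rule to the text's ~1000km figure at 2.5Gbit/s reproduces its ~60km figure at 10Gbit/s:

```python
def dispersion_limit_km(bitrate_gbps, ref_limit_km=1000.0, ref_rate_gbps=2.5):
    """Uncompensated reach scales roughly as 1/bitrate^2. The reference
    point is the ~1000 km limit at 2.5 Gbit/s quoted in the text."""
    return ref_limit_km * (ref_rate_gbps / bitrate_gbps) ** 2

print(dispersion_limit_km(10))  # 62.5 km, consistent with the ~60 km figure
print(dispersion_limit_km(40))  # under 4 km: every 40G span needs compensation
```

The same scaling explains why dispersion compensation, optional at 2.5Gbit/s, becomes a per-span necessity at 10Gbit/s and above.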

All the commercial players in this sector are developing dispersion compensating amplifiers. Dispersion compensation is typically done by adding specially designed compensating fibre in line with the transmission fibre.

The ratio in lengths is typically 5:1, i.e. it requires approximately 20km of special fibre to compensate for 100km of standard transmission fibre. The ratio in loss is approximately a factor of two in decibels, i.e. it requires 10dB of special fibre loss to compensate for 20dB of transmission fibre loss. Most amplifiers are designed to accommodate this additional loss by incorporating a third stage.
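The two ratios quoted above — roughly 5:1 in length and a factor of two in decibels of loss — give a quick budget for any span. A sketch, assuming standard fibre loss of about 0.2dB/km (an assumed figure, not from the interview):

```python
def dcf_requirements(span_km, length_ratio=5.0, loss_fraction=0.5,
                     fibre_loss_db_per_km=0.2):
    """DCF length and loss for a span of standard fibre, using the text's
    5:1 length ratio and half-the-span-loss-in-dB loss ratio. The
    0.2 dB/km standard-fibre loss is an assumed typical value."""
    span_loss_db = span_km * fibre_loss_db_per_km
    dcf_km = span_km / length_ratio
    dcf_loss_db = span_loss_db * loss_fraction
    return dcf_km, dcf_loss_db

print(dcf_requirements(100))  # 20 km of DCF, 10 dB of extra loss
```

This reproduces the text's worked numbers: 100km of transmission fibre (20dB loss) needs about 20km of DCF adding about 10dB of loss, which is exactly the budget the third amplifier stage must cover.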

A new amplifier stage is necessary. A common architecture is a two-stage amplification process — a low-noise front end followed by the power amplifier. With dispersion compensating fibre, however, it is necessary to convert the amplifier to a three-stage design.

You're not really adding an extra amplifier — you're modifying what you already have, in a sort of sandwich arrangement, so that it adds less noise. You don't cancel the noise; the noise is proportional to the gain. The point is that if you have to add extra loss to compensate the dispersion of, say, a 100km span — typically a loss of 25dB — that corresponds to an amplification factor of roughly 300.

In this arrangement: a gain of 25dB corresponds to a linear amplification by a factor of 300; and a loss of 25dB corresponds to a linear attenuation by a factor of 300. The extra stage of the amplifier is needed to overcome the additional loss of the dispersion compensating fibre. The idea is that two stages are needed to get approximately 20-25 dB gain to overcome the insertion loss of the transmission fibre. Dispersion compensation fibre introduces an additional loss of 10dB; so one more amplification stage is needed.
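The dB-to-linear arithmetic used throughout this discussion is worth pinning down; note that 25dB is more precisely a factor of about 316, which the text rounds to 300:

```python
import math

def db_to_linear(db):
    """Convert a power ratio in dB to a linear factor."""
    return 10 ** (db / 10)

def linear_to_db(ratio):
    """Convert a linear power ratio to dB."""
    return 10 * math.log10(ratio)

print(round(db_to_linear(25)))  # 316 -- the "factor of approximately 300"
print(db_to_linear(10))         # 10.0 -- the DCF's extra loss as a factor
print(linear_to_db(2))          # about 3 dB per doubling of power
```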

Ciena does design and manufacture amplifiers, but will be gradually transitioning these functions to an outsourcing model. For gain flatness, the key idea is that we have a very low systematic ripple.

The systematic ripple is the part of the total ripple that accumulates linearly with the number of amplifiers. There is also a part that accumulates as the square root of the number of amplifiers, and the sum of the two represents the total ripple per amplifier. The idea here is that simply comparing the total ripple between two different vendors is not enough — you need to compare the systematic components only.
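The accumulation model described above — systematic ripple adding linearly with amplifier count, random ripple adding as the square root — can be written down directly. The per-amplifier ripple figures below are invented purely to illustrate the comparison argument:

```python
import math

def accumulated_ripple_db(n_amps, systematic_db, random_db):
    """Total accumulated ripple: the systematic part grows linearly with
    the number of amplifiers, the random part as sqrt(N)."""
    return n_amps * systematic_db + math.sqrt(n_amps) * random_db

# After 25 amplifiers (assumed per-amp figures):
# Vendor A: 0.02 dB systematic + 0.25 dB random (0.27 dB total per amp)
# Vendor B: 0.08 dB systematic + 0.10 dB random (0.18 dB total per amp)
print(accumulated_ripple_db(25, 0.02, 0.25))  # A: 1.75 dB
print(accumulated_ripple_db(25, 0.08, 0.10))  # B: 2.5 dB
```

Vendor B looks better on single-amplifier total ripple, yet accumulates more over a long chain — which is exactly why comparing totals per amplifier is not enough.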

Another performance advantage is achieved with high-gain forward error correction (FEC). This allows the system designer to achieve a given bit error rate (BER) with relatively poorer uncoded conditions. For example, if the BER is specified at 10⁻¹⁵, you can operate with 8dB less launch power than without FEC. Here, Ciena offers a couple of decibels more advantage. The aim is to drop the launch power to reduce the influence of the non-linear effects.

Reduced launch power means reduced non-linearities. The OSNR is directly proportional to the launch power, while non-linearities grow as the launch power squared. Therefore, a 3dB launch power reduction suppresses non-linearities by 6dB while reducing the OSNR by only 3dB. FEC provides the freedom to trade off the non-linear degradations against the OSNR requirements.
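The trade-off in the previous paragraph is just two slopes: OSNR moves dB-for-dB with launch power, while non-linear penalties move at twice that rate because of the power-squared dependence. As a trivial sketch:

```python
def launch_power_tradeoff(delta_launch_db):
    """Effect of a launch-power change: OSNR shifts dB-for-dB, while
    non-linear effects shift at twice the rate (power-squared scaling)."""
    return {"osnr_change_db": delta_launch_db,
            "nonlinearity_change_db": 2 * delta_launch_db}

print(launch_power_tradeoff(-3))  # OSNR down 3 dB, non-linearities down 6 dB
```

FEC's coding gain pays for the OSNR column, leaving the non-linearity column as pure benefit.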

Receiver performance is measured by the OSNR and power required to achieve a specified BER. A more sensitive receiver achieves the same BER with lower power at a fixed OSNR, lower OSNR at a fixed power, or both. In DWDM systems, power is relatively cheap because there are amplifiers everywhere — but OSNR is precious because each amplifier adds noise. In a metro application, just the opposite is true — OSNRs are typically high, but power is difficult to find because amplifiers add cost. So both metrics are important.

Conceptually the system designer is between a rock and a hard place. You need to launch enough power to keep the OSNR high enough to operate your receiver, but you need to keep the power low enough to keep the non-linearities under control. A good system design finds the right balance for the transmission span, channel count and data rate. With dispersion, making the total path averaged dispersion nearly zero can compensate the linear effects. Linear dispersion may be compensated, but non-linearities must be managed.

Variable or tunable compensation should be available by the end of 2002. Dispersion compensation can be likened to the function of a resistor; variable compensation is the corresponding potentiometer.

Many different vendors are developing this functionality. The distinction to be made here is that tunable dispersion compensation is something that you set and forget. You fix the value at system installation, but then do not change it except under special circumstances. This would be analogous to a screwdriver-adjusted (manual) potentiometer. Tunable DC is very attractive because a single device covers a range of compensation values — resulting in inventory and installation cost savings.

Variable DC can be changed dynamically under remote control. This is analogous to a DAC. The DC value can be changed with signals present without disrupting service to respond to changing environmental conditions, path length changes, etc. It is something that will be needed for reconfigurable optical networks.
