SERDES design is driven by standards, network applications
Serializers and deserializers (SERDES) are chips that move data from the electrical to the optical domain and back again. Designers of OC-192 SERDES devices must work with key specifications and system-level requirements driven by SONET standards and Optical Internetworking Forum (OIF) standards.
The on and off ramps to an optical network ultimately involve the serialization and deserialization of digital data—vital functions at the core of the optical transponder function. These functions seem simple in concept yet are complex in realization.
FIGURE 1. In a generic peer-to-peer optical transmission link viewed from a high level, client data enters transponder blocks from either direction and moves from the electrical to the optical domain for transport over a fiber. The first sub-block performs SONET/SDH framing/deframing and any required forward-error-correction (FEC) functions. The next contains the SERDES, with the serializer often functioning as a multiplexer/CMU and the deserializer performing the demultiplexer/CDR functions.
The key drivers for these serializer/deserializer (SERDES) chips for OC-192 (9.953-Gbit/s) optical-line rates are based on a range of system requirements, including possible SONET compliance and the Optical Internetworking Forum's common electrical interface standard between SONET/SDH framing devices and SERDES chips (the OIF99.102.8 specification).
SYSTEM IS KING
System needs determine the basis for a SERDES design. Looking at a system from a high level, client data enters a generic peer-to-peer link at OC-192 or some lower rate (see Fig. 1). The input block prepares client data to go from the electrical to the optical domain for transport over a fiber. At the other end, the output block reverses this process. Transponder peers perform both functions, providing a bidirectional link.
Going one level down, the input and output blocks in each transponder break down further into three sub-blocks. On the client side, the first sub-block performs SONET/SDH framing/deframing and any required performance-monitoring or forward-error-correction (FEC) functions. This function is typically implemented by one or more application-specific integrated circuits (ASICs). When FEC is required, it is applied between the data framer and the transmission link.
The next sub-block contains the SERDES function—with the serializer often functioning as a multiplexer/clock multiplication unit (CMU), and the deserializer performing the demultiplexer/clock and data recovery (CDR) function.
A key spec for high-speed SERDES is the Optical Internetworking Forum's OIF99.102.8, also known as the SFI-4 (SERDES-to-framer interface) specification. Intended for STS-192/STM-64 interfaces between framer or FEC ASICs and SERDES devices, this spec is generally applied to SERDES devices for 9.953- to 10.709-Gbit/s rates.
The SERDES design is tied to its intended application, which is often a function of reach—the distance between transponders on the optical link. Definitions for various applications based on peer-to-peer reach are integrated into International Telecommunication Union (ITU)-T and SONET specifications (see Table 1).
No longer relegated to only long-haul applications, OC-192 is commonly found in everything down to backplane interfaces between adjoining racks of equipment—as evidenced by the very-short-reach application noted in the table. OC-192 has emerged as the channel rate of choice for optical links, with DWDM leading the way to multichannel implementations.
Reach requirements provide the basis for system jitter requirements (jitter being the variation of a signal with respect to an ideal reference signal), including jitter generation, jitter transfer, and jitter tolerance. These requirements are usually defined in terms of SONET compliance. Reach also influences the decision to use forward-error-correction (FEC) technology, which in turn determines the overhead requirements and carrying capacity of channels.
Without FEC, a link uses the standard OC-192 rate of 9.953 Gbit/s. In contrast, implementing ITU G.975 FEC adds roughly 7% of overhead (for a total of 10.664 Gbit/s), while G.709 "Digital Wrapper" FEC raises the total to 10.709 Gbit/s. Advanced high-gain FEC requires 15% to 25% of overhead.
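As a quick check on these figures, the totals can be derived from the exact OC-192 rate of 9.95328 Gbit/s. The 255/238 and 255/237 expansion ratios used below are assumptions consistent with the quoted totals, since the text itself gives only approximate overhead percentages:

```python
# Sketch of the line-rate arithmetic behind the FEC overhead figures.
# The exact OC-192 rate is 9.95328 Gbit/s; the 255/238 (G.975) and
# 255/237 (G.709 OTU2) expansion ratios are assumptions consistent
# with the ~7% overhead and 10.664/10.709-Gbit/s totals quoted above.

OC192_GBPS = 9.95328

def line_rate(base_gbps, ratio):
    """Line rate after applying a FEC expansion ratio."""
    return base_gbps * ratio

g975 = line_rate(OC192_GBPS, 255 / 238)
g709 = line_rate(OC192_GBPS, 255 / 237)

print(f"No FEC: {OC192_GBPS:.3f} Gbit/s")
print(f"G.975:  {g975:.3f} Gbit/s")   # ~10.664
print(f"G.709:  {g709:.3f} Gbit/s")   # ~10.709
print(f"High-gain FEC (+15% to +25%): "
      f"{OC192_GBPS * 1.15:.2f} to {OC192_GBPS * 1.25:.2f} Gbit/s")
```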
As reach requirements change, so do the applications and choices of FEC technology, adherence to SONET jitter compliance, the preferred data-modulation scheme, and the peer-to-peer transmission rate (see Table 2). These key system-level requirements that drive a SERDES design may be joined by needs for low power, small size, and low cost as well. Very-short-reach OC-192 applications generally use nonreturn-to-zero (NRZ) modulation, do not require SONET jitter compliance or FEC, and run at a standard SONET/SDH data rate (that is, no overhead). As reach increases above 2 km, SONET compliance becomes crucial to functionality.
The next transition takes place as the reach approaches 65 km, when systems often begin implementing some sort of FEC and correspondingly require higher transmission rates. Ultimately, at reaches above 100 km, we see advanced FEC techniques applied, as well as return-to-zero (RZ) modulation (see "Deconstructing the RZ vs. NRZ option," p. 94). At this point, the client side will be at the standard OC-192 rate or possibly a G.975/709 FEC rate, while the optical side will be a proprietary link running at a higher data rate (for example, 11.5 to 12.5 Gbit/s).
The Optical Internetworking Forum OIF99.102.8 specification, otherwise referred to as SFI-4 (SERDES-to-framer interface), defines the interface between the framer or FEC ASIC and the SERDES, which may be one chip (a "transceiver") or separate multiplexer and demultiplexer devices.
On the transmit side of the link, the framer ASIC presents 16-bit parallel data (differential signals) and a differential source-synchronous clock to the OC-192 serializer on a low-voltage differential signaling (LVDS) interface. The interface supports a "counter-clocking" scheme in which the SERDES uses its high-speed clock, divided by 16 to 622.08 MHz (for 9.953-Gbit/s rates), to provide a clock back upstream (the counter-clock) to the ASIC for generation of the parallel data-write timing.
On the receive side, recovered parallel data and clock (divided by 16) are presented from the deserializer to the framer ASIC on an LVDS interface. A low-jitter, 622.08-MHz, board-level reference clock is provided as the timebase for the SERDES. A few other signals are defined but are immaterial to this discussion.
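The 16:1 relationship between the serial line rate and the SFI-4 parallel clock is simple arithmetic, shown here as a sanity check (the divide-by-64 line corresponds to the slower 155.52-MHz reference some systems use):

```python
# The divide-by-16 clocking relationship described above: a 9953.28-Mbit/s
# serial line driving a 16-bit parallel interface yields exactly the
# 622.08-MHz SFI-4 clock. The divide-by-64 figure gives the slower
# 155.52-MHz reference found in some systems.

SERIAL_RATE_MBPS = 9953.28  # OC-192 line rate in Mbit/s
BUS_WIDTH = 16

parallel_clock_mhz = SERIAL_RATE_MBPS / BUS_WIDTH
print(parallel_clock_mhz)          # 622.08 MHz
print(SERIAL_RATE_MBPS / 64)       # 155.52 MHz
```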
Voltage swings are also defined by SFI-4, as are rise times for data, clocks, and alarm signals. LVDS is used for data and low-speed clocks. The reference-clock inputs are low-voltage positive-referenced emitter-coupled logic (LV-PECL), and all control signals are low-voltage transistor-transistor logic (LV-TTL). The SFI-4 specification also defines timing requirements for individual interfaces.
Beyond that, the SFI-4 spec defines a number of alarm and error (status) indication signals. The spec is fairly simple, defining about 90% of the interface; systems are typically implemented with a bit more complexity. For instance, some manufacturers add the capability to turn clocks on and off, or to select different reference-clock rates.
MUX/ CLOCK MULTIPLIER UNIT
Consider an example OC-192 multiplexer/CMU used on the transmit side of an optical link (see Fig. 2). After processing the client data, a framer ASIC presents 16 x 622.08-Mbit/s parallel data to the LVDS input receivers. The data is clocked into the mux data-input latches using a data-write clock provided by the framer. This parallel data is buffered in a FIFO (first in, first out), typically 8 to 12 bits deep.
Two clock domains are employed in the example mux/CMU—one clocks in the low-speed parallel data, as described above, and the other is used as a pristine reference to generate the high-speed serial-data output. The FIFO enables the two clock domains to maintain phase independence, including the effect of any jitter exhibited on the framer parallel data-write clock.
The board-level reference clock is frequency-multiplied by 16 to the 9.953-Gbit/s serial-data rate using a clock multiplier unit. The high-speed clock generated by the CMU provides all of the clocking used to read data out of the FIFO and multiplex the data into a serial-data stream. The serial data leaving the mux is retimed through a final flip-flop to minimize jitter generation. Finally, the high-speed serial data passes through a current-mode-logic (CML) differential output driver designed to drive a 50-ohm transmission line.
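The mux data path can be sketched behaviorally: parallel words are buffered (the FIFO's role) and shifted out one bit at a time. The function name and MSB-first ordering below are illustrative assumptions, not taken from any specific device; real silicon does this with shift registers clocked by the CMU:

```python
# Minimal behavioral model of the 16:1 mux path: parallel words are
# buffered (standing in for the clock-domain FIFO) and shifted out
# MSB-first as a serial bitstream. Names and bit ordering are
# illustrative assumptions.
from collections import deque

def mux_16_to_1(words, width=16):
    """Serialize 16-bit parallel words into a list of bits."""
    fifo = deque(words)        # stands in for the clock-domain FIFO
    serial = []
    while fifo:
        word = fifo.popleft()
        for i in range(width - 1, -1, -1):
            serial.append((word >> i) & 1)
    return serial

bits = mux_16_to_1([0x8001, 0xFFFF])
print(bits[:16])  # [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
```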
The SFI-4 specifies a board-level reference of 622.08 MHz for the interface, yet some systems still use a slower 155.52-MHz clock. Better performance can be achieved with the higher-speed reference, since inherent jitter or noise is multiplied by a smaller factor. The bottom line: tight jitter control in the reference clock is critical for SONET jitter performance.
Data output jitter, the sum of jitter components at the output, is the most critical specification for a mux/CMU device. At the system level, SONET requires a maximum in-band jitter (50 kHz to 80 MHz) of 0.09 UI (unit interval), where one UI equals one clock period (about 100 ps at 10 Gbit/s). This spec is for overall system jitter, so the net contribution from both the optical and electrical components must meet it. The high-speed clock output, if required, must also be low-jitter for driving retimed laser drivers and RZ modulators.
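Converting that budget into absolute time makes its tightness clear; this is pure arithmetic from the figures above:

```python
# The SONET jitter-generation budget in absolute time: one unit
# interval (UI) at the OC-192 line rate, and the 0.09-UI in-band
# limit quoted above.

LINE_RATE_HZ = 9.95328e9

ui_ps = 1e12 / LINE_RATE_HZ      # one UI in picoseconds
budget_ps = 0.09 * ui_ps         # in-band jitter-generation limit

print(f"1 UI    = {ui_ps:.2f} ps")      # ~100.47 ps
print(f"0.09 UI = {budget_ps:.2f} ps")  # ~9.04 ps
```

In other words, the entire link must hold peak in-band jitter to roughly 9 ps.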
DEMUX/CLOCK AND DATA RECOVERY UNIT
On the receiver side, serial data must be returned to a 16-bit-wide parallel format. The serial-data signal from the optical receiver interfaces directly to a demultiplexer/CDR (see Fig. 3). At this point, the signal is either single-ended or differential, and can range from very weak (less than 10 mVpp) to as high as 1 Vpp.
With such low voltage swings, it becomes a significant challenge to latch "ones" and "zeros" in an error-free manner. Thus, input sensitivity, the lowest signal swing at which the demux can pass error-free data, is a key parameter for the demux/CDR device.
With return-to-zero modulation, the signal must next be adjusted for phase and voltage before it is fed into the sampling logic. Most nonreturn-to-zero systems do not need phase or voltage adjustment, but many high-end systems include at least a voltage offset adjustment capability to extend error-free performance another 1 or 2 dB.
A board-level divide-by-16 (622.08-MHz) reference clock is provided to the internal clock-recovery unit (CRU), which recovers the clock from the incoming data. The recovered high-speed clock is used to latch the data and drive the demux logic.
Before the serial-data signal is latched, it passes through a limiting amplifier, which provides anywhere from 30 to 40 dB of gain. After the data is latched, it is demultiplexed into a 16-bit-wide data path. The parallel data is latched for resynchronization with a divide-by-16 version of the recovered high-speed clock and output from the chip via LVDS output drivers to the framer or FEC ASIC. The divide-by-16 (622.08-MHz) output clock is used as a write clock by the ASIC. The board-level reference clock for the demux/CDR is not subject to the same jitter requirements as the mux/CMU reference, because the goal is simply to lock to the frequency based on the average edges of the signal.
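The demux step is the mirror image of serialization: recovered serial bits are regrouped into parallel words. As before, the function name and MSB-first ordering are illustrative assumptions:

```python
# Behavioral sketch of the 1:16 demux step: the recovered serial bits
# are regrouped MSB-first into 16-bit parallel words. Names and bit
# ordering are illustrative assumptions.

def demux_1_to_16(bits, width=16):
    """Pack a serial bit list back into parallel words."""
    words = []
    for i in range(0, len(bits) - width + 1, width):
        word = 0
        for b in bits[i:i + width]:
            word = (word << 1) | b
        words.append(word)
    return words

print([hex(w) for w in demux_1_to_16([1] + [0] * 14 + [1])])  # ['0x8001']
```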
Besides input sensitivity, another key spec for the demux/CDR is jitter tolerance. The received data signal will have jitter accumulated from passing through multiple stages along the path. Jitter tolerance is defined as the ability to recover error-free data in the presence of jitter.
SONET defines the minimum jitter tolerance required for a receiver CDR as a function of frequency (see Fig. 4). Jitter is specified in terms of the maximum peak-to-peak phase deviation from a known reference relative to a unit time interval (UIp-p). For an OC-192 signal at 9.953 Gbit/s, one unit time interval is approximately 100 ps. For example, Fig. 4 shows that a SONET-compliant receiver must tolerate (that is, maintain error-free operation in the presence of) a received signal with up to 1.5 UIp-p (or 150% of a unit interval) of jitter within a band from 24 to 400 kHz. As the frequency of the jitter content goes up, the peak-to-peak jitter-tolerance specification of the data-recovery unit falls to 0.15 UIp-p. This graph is known as the "SONET jitter-tolerance mask." Input sensitivity and jitter tolerance together make up the two most important performance specs of a demux/CDR.
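The mask's shape can be sketched as a piecewise function. Only the two plateau values (1.5 and 0.15 UIp-p) come from the discussion above; the 400-kHz and 4-MHz corner frequencies and the 20-dB/decade rolloff between them are assumptions consistent with the usual SONET mask shape:

```python
def jitter_tolerance_uipp(f_hz):
    """Sketch of the OC-192 receiver jitter-tolerance mask. The 1.5-
    and 0.15-UIp-p plateaus are from the text; the corner frequencies
    and the 20-dB/decade slope between them are assumed, matching the
    usual SONET mask shape."""
    if f_hz <= 400e3:
        return 1.5                   # low-frequency plateau (24-400 kHz)
    if f_hz >= 4e6:
        return 0.15                  # high-frequency floor
    return 1.5 * (400e3 / f_hz)      # assumed 20-dB/decade rolloff

print(jitter_tolerance_uipp(100e3))  # 1.5
print(jitter_tolerance_uipp(10e6))   # 0.15
```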
Lee Walter is product marketing manager for Vitesse Semiconductor, 4323 ArrowsWest Drive, Colorado Springs, CO 80907. He can be reached at firstname.lastname@example.org.
Deconstructing the RZ vs. NRZ option
A high-speed data stream can be carried on an optical link using one of two data-modulation formats—nonreturn-to-zero (NRZ) or return-to-zero (RZ). The most common format is NRZ, but for ultralong-haul applications with spans well over 100 km, RZ modulation is typically used because it provides better tolerance in poor signal-to-noise environments where the optical signal is degraded by long-haul attenuation and optical dispersion.
With NRZ modulation, serial-data signal transitions occur at the beginning and end of the bit period, and only when the opposite logical value ("one" or "zero") is transmitted. A "one" is represented by the signal in a high state (for example, laser on) for the whole bit period, less the time required for signal rise and fall. Consecutive ones remain in the high state; thus, there is no "return to zero" during consecutive logical "one" bit periods. Binary zeros are represented as a low signal (for example, laser off), and consecutive zeros remain low.
Conversely, with an RZ signal, a "one" is represented as a signal pulse rising to logic high (for example, laser on) at the start of the period, then returning to logic low (for example, laser off) by the second half of the period (see figure). Thus the signal "returns to zero" on each logical "one" bit period. As with NRZ, a binary zero is represented as a low signal (for example, laser off), and consecutive zeros remain low.
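The difference is easy to see in a toy encoding that samples each bit period twice; this is purely illustrative, not a model of any real modulator:

```python
# Toy illustration of the two formats, two samples per bit period:
# NRZ holds the bit value for the full period, while RZ pulses high
# for only the first half of each "one". Purely illustrative.

def nrz(bits):
    """NRZ: the signal holds the bit value for the whole period."""
    return [level for bit in bits for level in (bit, bit)]

def rz(bits):
    """RZ: a 'one' is a half-period pulse; the signal then returns to zero."""
    return [level for bit in bits for level in (bit, 0)]

print(nrz([1, 1, 0, 1]))  # [1, 1, 1, 1, 0, 0, 1, 1] -- no return to zero
print(rz([1, 1, 0, 1]))   # [1, 0, 1, 0, 0, 0, 1, 0] -- pulse per "one"
```

Note how consecutive NRZ ones merge into one long high level, while each RZ one produces a distinct narrow pulse.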
Given identical clock rates, the pulse width is much narrower for a return-to-zero signal than for a nonreturn-to-zero signal, making it more difficult to generate and detect. Signal edges must be tightly controlled, making RZ modulation more difficult and costly to design and implement.
Dealing with RZ signals involves other complexities, including optics and modulation issues. On the other hand, an RZ signal provides better performance in noisy environments, making it a popular choice for ultralong-haul applications like undersea links.
From the SERDES designer's point of view, RZ has other complexities as well. On the transmit side, the phase relationship between the high-speed data and clock outputs must be stable and tightly controlled over temperature and supply-voltage variations. This is because the high-speed clock output from the SERDES is used to drive an external modulator that creates the RZ data pulses. The mux generates NRZ data and clock, which then feed the modulator; the clock essentially "chops" the NRZ data into RZ data. The modulator is typically implemented external to the mux component.
Looking at RZ implementation on the receiver side, we see a different challenge. Once an RZ signal has traversed the fiber, received pulses emerge attenuated and somewhat distorted, with optical noise on the peak portion of the pulse. In most RZ applications, the electrical signal is fed through a preprocessing equalization filter, which flattens the RZ pulse somewhat, essentially turning it into a quasi-NRZ signal.
With RZ modulation, the center of the data eye is more complex to target, with the time center shifted toward the start of the period. Also, the voltage threshold must be lower than the midpoint between peak voltage and zero to optimize detection. Essentially, the requirements for the RZ deserializer are more specialized, requiring an adjustment of both the sampling phase (the time component of sampling) and the voltage-decision threshold.
SERDES engineers face the RZ vs. NRZ modulation question today, and the issue will only grow in importance as developers increasingly demand systems that can implement RZ modulation.