Efficiently testing dwdm transmission quality in sonet systems
High-capacity dwdm systems present significant testing challenges to system manufacturers. There are several ways to improve testing accuracy and economics.
Dana Cooperson, Tektronix Inc.
The push is on for dense wavelength-division multiplexing (dwdm) product manufacturers to pack more wavelengths (channels) of light onto a fiber system. And driving this demand is the desire of carriers to more fully use the bandwidth of their fiber. Today's dwdm systems can carry up to 40 channels of OC-48 (2.5-Gbit/sec) traffic simultaneously, for an aggregate capacity of 100 Gbits/sec. Systems with 80, 96, and even 128 channels have been announced.
Although dwdm systems can transport virtually any optically based client system that fits a broadly defined set of wavelength and bit-rate criteria, the most cost-effective client system in use is OC-48 Synchronous Optical Network (sonet) or its Synchronous Digital Hierarchy (sdh) counterpart, stm-16 (also 2.5 Gbits/sec). With the equivalent of 1.3 million voice calls traveling over a 40-channel OC-48-based dwdm system, transmission quality and system integrity are critical. Testing--in design, manufacturing, and installation--is needed to verify and ensure that transmission quality in the public network meets stringent performance criteria.
Economy is a primary issue in manufacturing testing, as test cost directly affects the manufactured cost of goods sold of every system. It is possible to dramatically cut manufacturing test time and, therefore, cost by (1) testing all dwdm channels of a system in parallel while (2) reducing the time required to test each channel. A later section of the article discusses techniques for lowering per-channel test times by accelerating low bit-error-rate (ber) measurement. Since manufacturers also must ensure that dwdm products will operate as specified when installed in the field and operating with older (legacy) sonet equipment, two test techniques for emulating field performance in the factory are discussed later.
Two types of manufacturing tests
Manufacturing typically requires two types of tests: transmission-quality tests and optical-layer tests. Some background on these tests will be helpful before discussing how to conduct dwdm testing more efficiently.
Transmission-quality testing verifies the error performance of the client signal. The format (protocol) of the data being transported by the dwdm system is integral to the testing. These tests are straightforward but require a lot of time to verify low error rates. For example, GR-2918, the Bellcore (Bell Communications Research--Morristown, NJ) specification that governs multichannel sonet transport systems, requires performance of 10^-12 or better. At 2.488 Gbits/sec, verifying 10^-12 requires that no errors be received in 7 minutes of transmission. Many customers, however, expect performance better than 10^-14, or no errors in more than 11 hours. Verifying such low error rates with certainty means that transmission-quality tests can dominate manufacturing testing.
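To see why these numbers dominate test time, a short sketch (an illustration, not part of the article; the function name is ours) converts a target ber into the error-free transmission time it implies:

```python
# Sketch: time needed to transmit 1/ber bits -- the point at which even a
# single error would violate the target error rate -- at a given line rate.
def ber_test_time_seconds(ber: float, line_rate_bps: float) -> float:
    return (1.0 / ber) / line_rate_bps

OC48 = 2.488e9  # OC-48 line rate in bits/sec

print(ber_test_time_seconds(1e-12, OC48) / 60)    # ~6.7 minutes ("7 minutes")
print(ber_test_time_seconds(1e-14, OC48) / 3600)  # ~11.2 hours ("11 hours")
```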
Alternatively, optical-layer testing verifies the health of the multiwavelength optical signal without considering the format and content of the data being transported. Typical measurements are optical signal-to-noise ratio (osnr), multichannel gain tilt (a measure of optical power differences between channels), frequency drift, channel optical power, and total system optical power. These tests are not as straightforward as transmission-quality tests, but they require much less time to complete.
Today, there is no established correlation between transmission-quality and optical-layer measurements that applies under all practical conditions. If optical-layer tests are forgone in manufacturing, the dwdm system cannot be guaranteed to operate to specification. The error performance of the data path also must be verified so that problems affecting field-service quality (and hence revenue) will be identified in manufacturing.
Theoretically, open dwdm systems, i.e., systems designed to accept a wide variety of data formats, do not care what type of client signals they transport, as long as each input signal meets a few basic criteria. For example, if a wavelength falls between 1275 and 1575 nm, bit rate ranges between 140 Mbits/sec and 2.488 Gbits/sec, and the input power lies between -8 dBm and -17 dBm, a representative open dwdm system can handle it. The system should be protocol-independent and should be able to handle Asynchronous Transfer Mode, sonet, sdh, Internet protocol, and even legacy Plesiochronous Digital Hierarchy/asynchronous signals equally well. Experience shows, however, that systems are sensitive to protocol.
Even if the multichannel optical signal appears "healthy," meaning that the proper osnr, per-channel power, and so on are verified, acceptable transmission quality is not necessarily guaranteed. Conversely, if dwdm optical power is low and the system is in alarm, transmission quality will not necessarily be unacceptable.
Economical system testing
Again, verifying transmission quality with confidence to a 10^-14 ber requires a minimum of 11 hours, which is a lifetime in manufacturing testing. How, then, can we lower the cost of performance verification? Since time equals money in manufacturing, we want to verify guaranteed system performance with minimal test time. Increased test time equates either to lower throughput for a given capital/labor base or to higher capital and labor needs to maintain a required throughput. dwdm test time can be kept to a minimum in two main ways: by testing channels in parallel and by reducing per-channel test time.
Testing channels in parallel: If dwdm channels are tested in series, test time rises proportionally with the number of channels. So as channel counts increase above eight or 16, system test time becomes unacceptably long. For example, if 11 hours are needed to test for no errors in 10^14 bits, then the following test times would apply.
An 8-channel system would require nearly 90 hours (approximately 4 days).
A 16-channel system would require more than 178 hours (7.5 days).
A 40-channel system would require almost 450 hours (18.5 days).
No manufacturer could make money tying up capital and inventory like that. In contrast, when all channels are tested simultaneously, test times remain constant as channel counts increase.
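The scaling behind those figures can be sketched in a few lines (the ~11-hour per-channel constant is the article's 10^14 bits at OC-48; the function names are ours):

```python
# Sketch: serial test time grows linearly with channel count, while
# parallel test time stays flat at the single-channel figure.
HOURS_PER_CHANNEL = 1e14 / 2.488e9 / 3600  # ~11.16 h to pass 1e14 bits

def serial_test_hours(channels: int) -> float:
    return channels * HOURS_PER_CHANNEL

def parallel_test_hours(channels: int) -> float:
    return HOURS_PER_CHANNEL  # all channels run at once

for n in (8, 16, 40):
    print(n, round(serial_test_hours(n)), round(parallel_test_hours(n)))
# serially: ~89 h, ~179 h, ~447 h; in parallel: ~11 h regardless
```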
Of course, since nothing comes free, testing channels in parallel requires more capital equipment than testing in series. If a one-channel transmit-receive (transceiver) test setup costs $60,000 and economies of scale aren't possible, then our 40-channel system test setup would cost $2.4 million--attractive to test-equipment manufacturers but a disaster for the manufacturing capital budget and manufactured cost of goods sold.
Is there any way to reap the benefits of parallel test without incurring its high capital costs? One method is to split a single transmitted test signal into multiple dwdm channels and put the capital where it's really needed--at the receiver, where the data must be analyzed (see Fig. 1).
If we assume a typical receiver might cost half as much as the transceiver and that one optical input signal is split four ways before amplification is needed to meet minimum input power requirements, then we could lower our 40-channel-system test capital cost from $2.4 million to $1.5 million. This cost reduction is nearly 40% and retains the dramatic test-time savings of parallel channel-system testing. Test-floor managers and business-unit controllers appreciate cost reductions of this magnitude.
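The cost arithmetic can be reconstructed as follows. The half-price receiver and the 4-way split are the article's assumptions; treating a shared transmit source as also costing half a transceiver is our assumption, chosen because it reproduces the article's $1.5 million figure:

```python
# Sketch of the parallel-test capital cost comparison (assumed figures).
TRANSCEIVER = 60_000          # one-channel transmit/receive test set
HALF = TRANSCEIVER // 2       # assumed cost of a receive- or transmit-only set
CHANNELS = 40
SPLIT = 4                     # one transmitted signal split 4 ways

full_parallel = CHANNELS * TRANSCEIVER          # 40 transceivers
sources = CHANNELS // SPLIT                     # 10 shared transmit sources
split_setup = sources * HALF + CHANNELS * HALF  # shared sources + receivers

savings = 1 - split_setup / full_parallel       # 0.375 -- "nearly 40%"
print(full_parallel, split_setup)               # 2400000 1500000
```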
While parallel testing makes sense for verifying error performance, serial testing makes more sense for optical-layer testing. To efficiently use optical-layer test equipment such as optical spectrum analyzers--which can be bulky, expensive, delicate, and hard to keep calibrated--test systems can be designed to take advantage of optical switching and automation software that in turn switch the test equipment to each channel. Since each channel needs the equipment for only a brief time, this switching approach deploys these expensive assets efficiently.
Lowering per-channel test times: Manufacturers can use the following four methods to shorten the test time of each channel.
1. Characterize performance in design, verify performance in manufacturing. First and most importantly, test time may be lowered by testing only what is needed. If the product had been rigorously tested during development--module by module as well as during system integration--many parameters need not be tested during each production run. Characterization tests like jitter tolerance need not necessarily be conducted in each production run if jitter performance had been properly characterized during design.
Assuming a well-controlled manufacturing process, the system can be tested only for end-to-end error performance and other critical overall specifications. If error performance is satisfactory, then transmission-quality testing is complete. If a problem is found, error sectionalization techniques can help identify the location of the problem to a specific subsection or module, which can be tested further.
Characterization testing may migrate into manufacturing through the common practice of moving automated characterization software from the development lab to the production floor. This approach saves program development time in manufacturing, yet it can build unnecessary test time into each production run. It can also lead to unnecessary capital spending, since development test equipment is generally much more powerful and expensive than that needed for manufacturing. Test floor managers and business unit controllers frown on cost increases of this magnitude. To help control costs, error-performance test equipment should include drivers and other automation tools to create manufacturing test software quickly and efficiently.
2. Test at tributary rates only when needed. Another way to save test time and money is to use a tester dedicated to section and line test at the client system line rate of 2.488 Gbits/sec. Using lower rates, such as OC-12 (622 Mbits/sec), for system error performance testing is not recommended; at this rate, counting 10^14 bits would require more than 45 hours, versus 11 hours at OC-48. If needed for detailed sonet trouble sectionalization, lower-rate testers can be used to test at OC-48 tributary rates (e.g., OC-12 or OC-3 [155 Mbits/sec]) or to analyze path-level parameters such as pointer movements.
While all-in-one testers can be convenient, they also burden the per-channel capital cost unnecessarily. For cost-effectiveness, use tributary test equipment only where and when needed, much as previously pointed out with optical-layer test equipment.
3. Accelerate measurement of low error rates by extrapolation. A very effective way to improve per-channel test efficiency is to artificially stress the system so that error performance can be verified without waiting for quite so many bits to be present and accounted for. There are several ways to accelerate the measurement of low error performance through extrapolation.
Use "95% confidence" to bring down ber test time: To test for a 10^-14 ber with certainty requires that at least 10^15 bits (10 times the reciprocal of the required maximum error rate) be counted error-free. This requires 110 hours, or nearly 5 days, of testing. Practically speaking, a 95% confidence that a system meets a 10^-14 ber objective requires that 3 × 10^14 bits be counted without error. This test would take about 34 hours to complete. (Note: A practical engineer might further reason that no errors in 10^14 bits is good enough for manufacturing test--assuming a controlled process and rigorous characterization testing in design--and test only for the 11 hours required to transmit 10^14 bits.)
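The "3 ×" factor comes from simple statistics: if N bits pass error-free, the ber can be claimed below p with confidence C whenever N ≥ -ln(1-C)/p, and -ln(0.05) ≈ 3. A sketch (function name ours):

```python
import math

# Sketch: bits that must pass error-free to claim ber < p at confidence C.
def bits_for_confidence(ber: float, confidence: float) -> float:
    return -math.log(1.0 - confidence) / ber

OC48 = 2.488e9
n = bits_for_confidence(1e-14, 0.95)  # ~3.0e14 bits
print(n / OC48 / 3600)                # ~33.5 hours -- the article's "about 34"
```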
Attenuate the input signal: You can attenuate the input signal and plot a ber-versus-received-optical-power curve to extrapolate the error rate at the minimum sensitivity required by GR-2918. For example, GR-2918 requires error performance of 10^-12 or better at -28 dBm input power. By attenuating the signal at the receiver beyond -28 dBm, error rates between 10^-6 and 10^-10 can be measured quickly, and the ber at -28 dBm can be extrapolated rather than measured, saving at least an order of magnitude of test time.
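The procedure can be sketched with made-up data points (these are not measurements from the article, and a straight-line fit of log ber versus power is a simplification of what real procedures do, but it shows the idea):

```python
import math

# Sketch: stress the receiver beyond the -28 dBm spec point, measure the
# higher (fast-to-measure) error rates, fit log10(ber) against received
# power, then extrapolate back to the spec point.
powers_dbm = [-31.0, -32.0, -33.0]   # hypothetical stressed operating points
bers = [1e-10, 1e-8, 1e-6]           # hypothetical measured error rates

ys = [math.log10(b) for b in bers]
n = len(powers_dbm)
mx = sum(powers_dbm) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(powers_dbm, ys))
         / sum((x - mx) ** 2 for x in powers_dbm))
intercept = my - slope * mx

ber_at_spec = 10 ** (slope * -28.0 + intercept)  # extrapolated ber at -28 dBm
print(f"{ber_at_spec:.1e}")  # 1.0e-16 for these (exactly linear) points
```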
Modify the test set's receiver-decision threshold: A new acceleration method, described in draft iec (International Electrotechnical Commission) and tia (Telecommunications Industry Association, Arlington, VA) documents (iec 61280-2-8 and ofstp-8, respectively), involves modifying the decision threshold of the optical receiver, i.e., the signal level at which a digital "one" versus a digital "zero" is determined to have been received (see Fig. 2; see also Lightwave, May 1998, page 32). By raising and lowering the decision threshold, ber-versus-threshold values can be plotted and the optimal threshold and minimum ber of a system easily determined. With this method, error rates of 10^-8 and higher are measured and used to extrapolate to 10^-14. At 2.488 Gbits/sec, the method described by the iec/tia may reduce the time needed to verify 10^-14 ber with confidence by a factor of 100. Although the test setup described in the draft documents is somewhat cumbersome (the optical method requires optical splitters, an optical attenuator, and a DC light source in addition to the sonet tester), a sonet tester that provides direct access to the receiver-decision threshold makes the procedure much easier to accomplish.
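The statistical core of the method can be sketched as follows (this is the underlying Q-factor arithmetic, not the iec/tia procedure itself): each measured ber maps to an equivalent Q value via ber = 0.5·erfc(Q/√2), and because high-ber points take only seconds to measure at OC-48, they can be extrapolated toward the optimum threshold's far lower ber:

```python
import math

# Sketch: convert between ber and Q for the Gaussian-noise decision model.
def ber_from_q(q: float) -> float:
    return 0.5 * math.erfc(q / math.sqrt(2))

def q_from_ber(ber: float) -> float:
    # invert ber_from_q by bisection; adequate for illustration
    lo, hi = 0.0, 12.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if ber_from_q(mid) > ber:
            lo = mid   # ber still too high: Q must be larger
        else:
            hi = mid
    return (lo + hi) / 2

print(q_from_ber(1e-8))   # Q ~ 5.6: measurable in seconds at OC-48
print(ber_from_q(7.65))   # Q ~ 7.65 corresponds to roughly 1e-14
```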
Inject an interfering sinusoid at the receiver: The draft iec and tia documents detail another method (either optical or electrical) of accelerating ber measurements. In it, a sine-wave generator is used to inject an interfering signal into the receiver. Then a ber-versus-sine wave amplitude curve can be plotted and the 10-14 error rate extrapolated. This method, which also requires equipment (e.g., a sine-wave generator, an optical attenuator, an interfering laser) in addition to the sonet tester, can lower overall test time, but does complicate the test process.
4. Improve test convenience. The final means of lowering per-channel test time is to make testing more convenient, which can be done in three ways:
Pause and restart tests for setup changes: Minimizing disruptions can improve test efficiency. Long tests may require that test gear or systems under test be reconnected. To reconnect without affecting overall test results and to increase throughput, use a tester with pause/resume functionality. With this feature, error measurement can be suspended for the time it takes to reconnect the system under test, and testing can resume as if nothing had happened. Thus, long-term error performance tests can be conducted without restarting the test at each setup change.
Restart tests after power failures: Similarly, a tester that can restart after a power outage without "losing its place" in the test can prevent considerable aggravation. Ask any test engineer who has left the factory for a relaxing weekend, expecting to run a 48-hour test before the "big order" is due to ship Monday morning, and returns only to find that the test died Friday night when the local power company burped.
Generate a signal to test osnr: Since osnr is typically part of manufacturing system testing, a ber tester that can be switched into constant wavelength mode to be used as a signal source for osnr or spectral-width test can be convenient for engineers who do not want to integrate yet another piece of equipment into their setup.
Emulating field behavior in the factory
Since open dwdm systems promise to increase the carried bandwidth of existing fiber systems, they must be able to work with a wide range of sonet systems, both old and new, from different manufacturers. Even if the dwdm system's performance has been verified with a test set and found to meet the required error-performance criteria, you can't assume that all will be well when the system is integrated with the legacy sonet systems that are diligently carrying network traffic.
Legacy sonet network elements can be sensitive to the amplified spontaneous emission (ase) and signal-spontaneous beat noise that characterize the optical amplifiers used in dwdm systems. Each amplifier introduces ase into the sonet system, decreasing the signal-to-noise ratio and leading to signal degradation (see Fig. 3).
When the optical signal is detected at the receiver, other features of optically amplified systems must be accounted for. The ber is determined differently in an optically amplified system than in a conventionally regenerated one. The probability of error in the latter is mainly determined by the amount of receiver noise. In a properly designed optically amplified system, the probability of error when receiving a binary value of one is determined by the signal mixing with the ase. The probability of error when receiving a binary value of zero, conversely, is determined by the ase noise value alone.
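The unequal-noise decision model described above can be sketched numerically (this is the textbook Q-factor model with hypothetical rail statistics, not figures from the article): because the "one" rail carries large signal-ase beat noise while the "zero" rail sees ase alone, the best decision threshold sits well below the midpoint between the rails.

```python
import math

# Sketch: optimum decision threshold and Q for unequal rail noise.
def optimum_threshold(mu1, sigma1, mu0, sigma0):
    # threshold at which the two symbols' error probabilities are equal
    return (sigma0 * mu1 + sigma1 * mu0) / (sigma0 + sigma1)

def q_factor(mu1, sigma1, mu0, sigma0):
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber(q):
    return 0.5 * math.erfc(q / math.sqrt(2))

# hypothetical rail statistics: noisy ones, comparatively quiet zeros
mu1, s1, mu0, s0 = 1.0, 0.10, 0.0, 0.02
print(optimum_threshold(mu1, s1, mu0, s0))  # ~0.167: far below the 0.5 midpoint
print(ber(q_factor(mu1, s1, mu0, s0)))      # ber at Q ~ 8.3
```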
A test set can correct for this ase noise by optimizing its decision threshold to account for ase, as a new sonet network element would. Or it can emulate a legacy sonet system by allowing the user to adjust the decision threshold. It will then produce an error result that more closely predicts the behavior of older sonet systems. In addition to emulating legacy network elements, the adjustable threshold can enable accelerated ber measurements.
Emulating live traffic
To estimate error performance accurately under live traffic conditions, testing should be done with a signal that mimics live traffic. Consider this analogy: In the early days of T-carrier system deployment, some system problems would not surface before service was turned up, unless certain test patterns known to "stress" the system similarly to live traffic were used. Marginal performance of different network elements, e.g., multiplexers and regenerators, would manifest itself under different stresses.
With dwdm, different types of traffic stress the dwdm system differently. Patterns that emulate live traffic on that system type should be used to verify and guarantee required error performance. To emulate live-traffic transport on a sonet system under test conditions, a pseudorandom bit sequence (prbs) is used. Meeting error-performance objectives using 2^23-1 prbs payloads assures that the system will operate with the proper network-transmission quality.
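For illustration, a minimal 2^23-1 prbs generator can be sketched as a linear-feedback shift register. The feedback taps (stages 23 and 18) follow the ITU-T O.150/O.151 convention for this sequence; real test sets also invert the output, which is omitted here for simplicity.

```python
# Sketch: 2^23-1 pseudorandom bit sequence via a 23-stage Fibonacci LFSR
# with feedback from stages 23 and 18 (bit positions 22 and 17).
def prbs23(seed: int = 0x7FFFFF):
    state = seed & 0x7FFFFF   # 23-bit register; must be nonzero
    while True:
        fb = ((state >> 22) ^ (state >> 17)) & 1
        yield (state >> 22) & 1                 # output the oldest stage
        state = ((state << 1) | fb) & 0x7FFFFF

gen = prbs23()
first_bits = [next(gen) for _ in range(8)]
print(first_bits)   # all ones at first, for the all-ones seed
```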
Economic testing
To verify performance of their products in manufacturing, dwdm system suppliers must test both optical- and transmission-domain specifications. To improve test-floor economics, manufacturers can take advantage of the parallel nature of their products and test all channels simultaneously. Additionally, manufacturers can make per-channel tests as efficient as possible by testing only what is necessary, by using optical test gear serially, and by accelerating the measurement of low bers. These simple techniques, along with the technology improvements realized by the dwdm manufacturers' design groups, continue to improve dwdm system economics and ensure continued growth of the market.
Dana Cooperson is a marketing manager for Tektronix's Microwave Logic product line in Chelmsford, MA. She can be contacted at [email protected]