Vittorio Bertogalli, Edgar Leckel, and Ulrich Wagemann
The intrinsic compromise between the quickness of a measurement and its accuracy is heightened when using swept-wavelength test systems on DWDM components. Performance can be improved with precautions and post-processing steps that address dynamic system limitations and side effects that reduce measurement accuracy.

The method of choice for testing passive DWDM components over wavelength is a tunable laser source operated in swept mode, together with a broadband receiver (in most cases an optical-power meter). Besides high resolution and accuracy, this approach enables engineers to test multiple channels or devices in parallel, a key technique for achieving the highest throughput in manufacturing (see Fig. 1).
In any time-dependent system, measuring quantities usually means compromising between the speed of the measurement and its accuracy. So it is no surprise that extra care must be taken when interpreting the results of high-speed, swept-wavelength measurement systems in order to keep misinterpretation to a minimum.
SWEPT-WAVELENGTH SYSTEM TRAITS

There are different ways to measure the loss of a device over wavelength. In a stepped measurement system, the wavelength is changed in discrete steps and the associated power values are recorded. Another approach uses a mode-hop-free tunable laser source that sweeps over a range of wavelengths; the output wavelength is tuned continuously, preferably at a constant rate over time. The latter method is especially attractive for manufacturing systems because it delivers the quickest results. As the received power and wavelength values are measured, it is relatively straightforward to compute the loss values simultaneously.

To synchronize the associated power-wavelength data, a trigger line with negligible inherent transmission delays is typically used. An in-depth analysis of the entire system will uncover possible sources of measurement error. This analysis is valid for all types of swept-wavelength measurement systems, independent of their individual implementations.
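The loss computation itself is straightforward bookkeeping once power and wavelength samples are trigger-synchronized. The sketch below shows one way it could be done; the function name, units, and use of a separately recorded reference trace are illustrative assumptions, not details of any particular test system.

```python
import numpy as np

def loss_trace(wavelengths_nm, p_meas_mw, p_ref_mw):
    """Insertion loss in dB at each trigger-synchronized sample point.

    wavelengths_nm: wavelength recorded at each trigger event (nm)
    p_meas_mw:      power through the device under test (mW)
    p_ref_mw:       reference power without the device (mW)
    """
    loss_db = 10.0 * np.log10(np.asarray(p_ref_mw) / np.asarray(p_meas_mw))
    return np.asarray(wavelengths_nm), loss_db

# Example: a device that halves the power shows ~3 dB of loss.
wl, loss = loss_trace([1550.000, 1550.001], [0.5, 0.5], [1.0, 1.0])
```

Because each power sample is paired with a wavelength sample by the same trigger event, the loss trace inherits the wavelength accuracy of the source directly.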
For our discussion, it helps to separate system inaccuracies into two different groups. The first group can be classed as static inaccuracies, while the second group is best described—not surprisingly—as dynamic inaccuracies, which occur in the course of a swept operation.
At the source side, parameters that determine static inaccuracies include accuracy, resolution, and repeatability on the wavelength axis, as well as stability and repeatability on the power axis.
At the receiver (power-meter) side, the list of static inaccuracies can include power linearity and total uncertainty. Dynamic inaccuracies may arise from signal delays, signal distortions, integration times, and trigger delays. In general, the speed of measurement and its achievable accuracy are intertwined here.

Apart from the case of simple trigger delays, all dynamic uncertainties or irregularities can be traced back to the same origin. In short, any measured signal, such as power or wavelength, needs to be "conditioned" before being sampled and digitized for further evaluation. As the name suggests, conditioning is the process of suppressing noise (especially necessary when large dynamic ranges are measured) and of removing signal components above the Nyquist frequency limit, otherwise known as anti-aliasing. For noise filtering, in most cases, high frequencies are simply cut off. Conditioning is carried out through one or more analog electronic filters; additional digital filters, such as digital averaging, might also be present for further noise suppression.
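To make the conditioning step concrete, the sketch below implements one of the simplest digital noise filters mentioned above, a moving average; the tap count and the step-input demonstration are illustrative assumptions. Note how the filter suppresses fast variations but also delays and smooths the signal, which is exactly the side effect discussed later.

```python
import numpy as np

def moving_average(samples, n_taps):
    """Simple digital averaging filter: suppresses noise, but also
    delays the signal by (n_taps - 1) / 2 samples and smooths sharp
    features such as filter edges."""
    kernel = np.ones(n_taps) / n_taps
    return np.convolve(samples, kernel, mode="full")[: len(samples)]

# A sharp step edge comes out of the filter both later and smeared out:
step = np.concatenate([np.zeros(10), np.ones(10)])
filtered = moving_average(step, 5)
```

The same trade-off holds for the analog filters in the signal path: stronger noise suppression means a longer delay and more smoothing.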
Because faster sweep speeds increase the frequency of the signal to be measured, they generally require a higher system bandwidth as well. If this condition is not met, differences between traces recorded at different sweep speeds are likely to occur.
It is important to underline at this point that, in general, a swept system is characterized by a transfer function, which describes the setup's impact on the measurement result; this real-world aspect applies to our discussion of high-speed measurements as well. An input function (here representing insertion loss) gets distorted by a transfer function, F, which might affect both phase and amplitude (see Fig. 2).
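A first-order low-pass is a convenient stand-in for such a transfer function F (the choice of filter and cutoff frequency here is an assumption for illustration, not the actual response of any instrument). Applied in the frequency domain, it attenuates and phase-shifts each spectral component of the input:

```python
import numpy as np

def first_order_lowpass(signal, dt, f_cutoff):
    """Apply the transfer function F(f) = 1 / (1 + j*f/fc) in the
    frequency domain: each component is attenuated and phase-shifted."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    F = 1.0 / (1.0 + 1j * freqs / f_cutoff)
    return np.fft.irfft(np.fft.rfft(signal) * F, n=len(signal))

# A tone exactly at the cutoff frequency emerges attenuated by
# 1/sqrt(2) in amplitude and delayed by a 45-degree phase shift.
dt = 1e-3
t = np.arange(1000) * dt
tone = np.sin(2 * np.pi * 10.0 * t)          # 10-Hz sinusoid
out = first_order_lowpass(tone, dt, f_cutoff=10.0)
```

Both effects, the amplitude error and the delay, grow with the signal frequency, which is why faster sweeps (higher signal frequencies) suffer more distortion for a fixed filter bandwidth.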
As mentioned, analog filters might be needed to remove a noise floor from the signal; this is an especially difficult task for low-intensity signals. Even when the desired noise suppression is achieved, other side effects of analog filtering, such as signal attenuation, time delay, and shape distortion, can lead to misinterpretation of the recorded data.
For most test engineers, "throughput" is a key consideration. With a given measurement system and device under test, minimizing the measurement time required would increase throughput. However, measurement accuracy affects throughput as well: high accuracy allows for lower test margins, which has a positive impact on manufacturing yield. When these parameters become coupled, the optimization problem becomes more complex. Moreover, the level of this interaction strongly depends on the type of device, making things even more complex. In other words, the equation
Throughput = f(measurement speed)
should be replaced by
Throughput = f(measurement speed, device under test, dynamic uncertainties, ...)
LOOK FOR DISTORTIONS
When investigating one of the most widely used narrowband devices, the fiber Bragg grating, for dependencies between measurement speed and accuracy, it is essential to use test equipment with minimal static errors so that errors resulting from dynamic performance can be distinguished.
A maximum deviation of up to 5 pm between traces recorded at different tuning speeds can be detected at the highest speeds (see Fig. 3). This deviation shows up essentially as a wavelength shift.
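A fixed signal delay translates directly into an apparent wavelength offset proportional to the sweep speed. The arithmetic is simple; the specific numbers below (a 50-µs delay at 100 nm/s) are hypothetical values chosen for illustration, not measured results:

```python
def wavelength_shift_pm(filter_delay_us, sweep_speed_nm_per_s):
    """Apparent wavelength offset caused by a fixed signal delay:
    shift = delay * sweep speed, converted here to picometres."""
    return filter_delay_us * 1e-6 * sweep_speed_nm_per_s * 1e3

# e.g. a hypothetical 50-us delay at a 100-nm/s sweep gives a 5-pm shift
shift = wavelength_shift_pm(50.0, 100.0)
```

This is why the deviation grows with tuning speed even though the filter delay itself is constant.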
It is fair to say that current test equipment meets current test requirements, even if a deviation might slightly affect calculation routines for determining a central wavelength. But it is also clear that, for passive components with decreasing channel spacing or steeper spectral characteristics, more problems might occur.
ESTIMATING DISTORTIONS

Although DWDM systems have made huge steps forward, neither systems nor components have reached their limits. A migration toward channel spacings narrower than 100 GHz is likely in future long-haul systems, and ultranarrow devices might play a significant role in wavelength-based services, especially in the metro and access categories.

An acetylene gas cell can be used to emulate distortions for future systems. Acetylene is a well-known material for wavelength referencing and shows narrow, well-defined absorption peaks in the 1550-nm region (see Fig. 4). The steepness of the slopes of an absorption peak can be estimated at 0.7 dB/pm, and the FWHM is about 35 pm. This corresponds to a system with roughly 5-GHz channel spacing.
Unfortunately, the dynamic range of the absorption peaks is limited to some 15 dB. Because high-performance passive components such as fiber Bragg gratings or dielectric filters can exhibit a dynamic range exceeding 65 dB, an attenuator is added to reduce the signal intensity artificially. By reducing the device input power, it is possible to demonstrate the joint effects of analog filtering and tuning speed at lower signal-power levels (as in the notch of a fiber Bragg grating). In a passband with a high signal level, these effects do not show up significantly.

The experimental results show both effects, from signal delay and from signal distortion, as functions of measurement speed. Both effects become stronger as the speed of measurement is increased.
As these interdependencies are inherent properties of any swept-wavelength measurement system, it is important to consider ways to overcome these effects, for example, by varying sweep speeds and checking for the speed limit of a given setup, if any. Of course, this is not a satisfying solution because decreasing sweep speed comes hand in hand with increasing test time.
COMPENSATING FOR DISTORTIONS
Before looking for the optimal compromise in measurement speed, one must identify the solutions available to suppress or reduce these effects. Although the presence of filters is unavoidable, some of their side effects can be minimized. In most cases, the phase response of a filter merely delays the signal and contributes nothing to noise suppression. Because the mathematics of these filters are well known, compensating for them by post-processing makes sense.
Digital post-processing of data can be performed, in general, as long as the filter response of a system is known. As an example, consider the fiber Bragg grating results of Figure 3: after compensating numerically for the filter group delay (as performed automatically by measurement-library functions), no residual impact of sweep speed on wavelength accuracy can be observed (see Fig. 5). In other words, with some corrective action it becomes possible to increase measurement speed on current test equipment without sacrificing accuracy.
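For a constant group delay, the correction amounts to shifting each sample back along the wavelength axis by the product of sweep speed and delay. A minimal sketch (the function signature and units are assumptions, not the actual measurement-library interface):

```python
import numpy as np

def compensate_group_delay(wavelengths_nm, powers_db, sweep_nm_per_s, delay_s):
    """Undo a constant filter group delay by shifting each sample back
    along the wavelength axis by (sweep speed * delay)."""
    corrected_wl = np.asarray(wavelengths_nm) - sweep_nm_per_s * delay_s
    return corrected_wl, np.asarray(powers_db)

# A sample logged at 1550.005 nm during a 100-nm/s sweep through a
# filter with a 50-us delay actually belongs at 1550.000 nm.
wl, p = compensate_group_delay([1550.005], [0.0], 100.0, 50e-6)
```

Because the shift scales with sweep speed, the same correction removes the speed dependence of the traces rather than just recentering one of them.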
For more demanding wavelength-selective devices (emulated here by the gas cell), a simple delay correction can turn out to be insufficient, as was the case here. A more complex compensation of the filter characteristics (its phase delay and, in part, its gain as a function of frequency) is needed to obtain good results (see Fig. 6).
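A sketch of such a compensation, under the assumption that the distorting filter is a known first-order low-pass: the recorded spectrum is divided by the filter's complex response, but only up to a chosen frequency limit so that out-of-band noise is not amplified, which is why the gain can be restored only partially.

```python
import numpy as np

def inverse_filter(signal, dt, f_cutoff, f_max):
    """Compensate a known first-order low-pass F(f) = 1 / (1 + j*f/fc):
    divide the spectrum by F up to f_max; above f_max the data are left
    untouched so that out-of-band noise is not amplified."""
    freqs = np.fft.rfftfreq(len(signal), dt)
    F = 1.0 / (1.0 + 1j * freqs / f_cutoff)
    inv = np.where(freqs <= f_max, 1.0 / F, 1.0)
    return np.fft.irfft(np.fft.rfft(signal) * inv, n=len(signal))

# Demonstration: distort a 10-Hz tone with the same filter, then recover it.
dt = 1e-3
t = np.arange(1000) * dt
orig = np.sin(2 * np.pi * 10.0 * t)
freqs = np.fft.rfftfreq(1000, dt)
distorted = np.fft.irfft(np.fft.rfft(orig) / (1.0 + 1j * freqs / 10.0), n=1000)
recovered = inverse_filter(distorted, dt, f_cutoff=10.0, f_max=50.0)
```

In practice the filter response would be characterized from the real setup rather than assumed, and the choice of f_max sets the compromise between restored sharpness and amplified noise.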
Along the wavelength axis, distortions are almost absent, and along the amplitude axis the differences between results at varying sweep speeds are minimal. In fact, the main factor limiting the resolution of the peak minimum at high speeds seems to be simply the sampling frequency of the receiver.
Filtering is essentially unavoidable to obtain accurate results, and in swept setups it sets a limit to the maximum acceptable speed. The errors depend on the dynamics of the device under test and may range from wavelength offsets to more complex combinations of delays and distortions. Post-processing can be used to decouple measurement speed and accuracy, to a significant extent.
Vittorio Bertogalli is dynamic systems engineer, Edgar Leckel is R&D section manager, and Ulrich Wagemann is product manager for Agilent Technologies, Optical Communication Measurement Division, Herrenberger Str. 130, 71034 B