There are several hurdles to significant deployment of serial 100-Gbps technology, with price and technological complexity two of the most widely discussed. But one major hurdle that deserves more attention than it has received is the lack of installation test systems, or even agreement on what capabilities such instruments would require.
Constellation analysis is common in 100G R&D. How might it be used in the field? Source: EXFO
Which is not to say that no one is thinking about the problem. "The first test equipment that you will see for installation is BER [bit error rate] testers. But in my opinion, that won't be enough," offers Ildefonso Polo, director of product marketing, transport products at test equipment developer Sunrise Telecom, by way of example. "The information that the BER tester gets is filtered too much, to the point that it could prevent the user from seeing problems in the network. Because every error gets corrected… via the post-processing to clean up the signal. So you may not be able to see what's going on behind the curtains if test equipment doesn't change."
Ideally, Polo says, you'd want access to the signal both before and after processing. That way, you'd know the actual condition of the network and how efficiently, and how close to their limits, the forward error correction and coherent detection receiver are running. A test instrument's ability to make these measurements would hinge on how much information it could extract from the 100-Gbps transponder it uses to see the signal, which means test transponders would have to be designed to expose this information to the test equipment.
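To make the idea concrete, here is a minimal sketch of the kind of pre-FEC margin calculation such an instrument might perform, assuming the transponder exposes a raw (pre-correction) BER and that the FEC's correction threshold is known. The BER values and threshold below are illustrative assumptions, not figures from any particular transponder or vendor.

```python
import math

def q_db_from_ber(ber):
    """Convert a bit error rate to a Q factor in dB (Gaussian-noise approximation)."""
    # Find Q such that BER = 0.5 * erfc(Q / sqrt(2)) by simple bisection.
    lo, hi = 0.0, 20.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid  # error rate still too high; Q must be larger
        else:
            hi = mid
    q = (lo + hi) / 2
    return 20 * math.log10(q)

# Hypothetical readings a test transponder might report
pre_fec_ber = 2.0e-3    # raw line BER before error correction
fec_limit_ber = 3.8e-3  # assumed BER threshold the FEC can still correct

margin_db = q_db_from_ber(pre_fec_ber) - q_db_from_ber(fec_limit_ber)
print(f"Pre-FEC Q margin: {margin_db:.2f} dB")
```

A tester that reads only the post-FEC signal would report zero errors in both healthy and marginal cases; only the pre-FEC numbers reveal how little margin remains.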
Of course, in the case of such modulation formats as dual-polarization quadrature phase-shift keying (DP-QPSK), the test instruments will have to evaluate both amplitude and phase. Optical modulation analyzers with constellation analysis capabilities fill this role in the lab. However, they're usually part of a multi-instrument evaluation suite that is too expensive and bulky in its current incarnation to meet expectations for field test equipment.
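As a rough illustration of the amplitude-and-phase view that constellation analysis provides, the sketch below computes an error vector magnitude (EVM) over QPSK symbols, standing in for one polarization of a DP-QPSK signal. The received symbols are simulated with additive noise; in a real instrument they would come from the coherent receiver, and nothing here reflects any specific analyzer's method.

```python
import cmath
import math
import random

# Ideal QPSK constellation points (one polarization of a DP-QPSK signal)
ideal = [cmath.exp(1j * (math.pi / 4 + k * math.pi / 2)) for k in range(4)]

# Simulated received symbols: ideal points plus Gaussian noise
random.seed(1)
received = [random.choice(ideal) + complex(random.gauss(0, 0.05), random.gauss(0, 0.05))
            for _ in range(10_000)]

def evm_percent(symbols, reference):
    """RMS error vector relative to the nearest ideal point, as a percentage."""
    err_power = 0.0
    for s in symbols:
        nearest = min(reference, key=lambda p: abs(s - p))
        err_power += abs(s - nearest) ** 2
    ref_power = sum(abs(p) ** 2 for p in reference) / len(reference)
    return 100 * math.sqrt(err_power / len(symbols) / ref_power)

print(f"EVM: {evm_percent(received, ideal):.2f}%")
```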
Daniel van der Weide, vice president of engineering at optical modulation analyzer vendor Optametra, told Lightwave at ECOC last September that his company has started to think about what role its technology might have in network installation applications. However, he didn't say where that thinking had led him and his charges so far.
Polo points out that cable operators faced a similar challenge with their QAM-based networks and managed to find a workable solution. The optical equipment and test communities will have to follow that lead if 100 Gbps is to achieve more than boutique status in carrier networks. – Stephen Hardy
Google plays Cisco
Google's fiber-to-the-home initiative made headlines last year, but it wasn't the company's only example of optical activism. Primarily through Bikash Koley, a Google senior network architect who made the rounds of the major trade shows and conferences in 2010, the company also stumped for a 100-Gigabit Ethernet alternative to 100GBase-LR4. The 4x25-Gbps format and accompanying gearbox IC of the IEEE specification would make the resulting module unnecessarily expensive, Koley repeatedly asserted (a viewpoint he memorably debated with Chris Cole of Finisar at ECOC in Italy last September).
The campaign reached its peak only recently with the announcement that Google would be a founding member of a new multi-source agreement (MSA) aimed at creating a 10x10-Gbps module for 100-Gigabit Ethernet applications. Brocade, JDSU, and Santur joined Google as MSA supporters. But how many other companies will join the MSA is a major question.
Google's success in getting the MSA created recalls the influence Cisco has had on optical module design. When faced with competing data module specifications in the past, companies checked for smoke signals from Cisco headquarters in San Jose before making a choice. Cisco and companies within its sphere of influence represented so much potential business that targeting the switch and router giant's needs made strategic sense.
The 10x10-Gbps MSA aims first for a CFP format, with smaller modules to follow. Source: Finisar
It's uncertain whether Google's support represents a similarly lucrative opportunity. Lower cost has universal appeal; JDSU told Lightwave via e-mail that "there are numerous network equipment manufacturers and service providers who have expressed interest" in the 10x10-Gbps format. But interest doesn't necessarily lead to sales, particularly among carriers, major enterprises, and others who are rarely as iconoclastic as Google when it comes to ignoring the IEEE and the interoperability benefits that IEEE-compliant technology can bring. One could foresee carriers and others sitting on the fence until multiple module vendors proclaim their intent to develop devices compliant with the new MSA. Meanwhile, those same module suppliers might wait until customers say they'll buy such transponders before committing to expensive product development. And during this period of uncertainty, 100GBase-LR4 design work and cost optimization will progress.
The time when module vendors could afford to speculate with development funds has long passed. How many of these suppliers will jump aboard the MSA will indicate whether the market has a new major catalyst. – SH
Trading on low-latency demand
The 2010 boom in low-latency network construction aimed to meet the requirements of financial houses. However, it seemed that low latency's 15 minutes of fame would expire as soon as carriers finished these builds.
Not surprisingly, equipment vendors now say that low latency benefits many applications besides trading, including data center interconnection, cloud computing, gaming, video transmission, smart grids, health care, and grid computing.
Is this reality or a sales pitch? Many of these business-service applications could ride over the same networks as financial traffic. So the new low-latency service providers in New York City (and Chicago, London, Frankfurt, etc.) should demonstrate whether low latency truly can drive new network deployments elsewhere. – Stephen Hardy