What’s holding up 40G?
Despite physical impairments that scale steeply with bit rate, 40-Gbit/sec transmission is now technologically feasible thanks to advanced modulation schemes and other developments, say industry insiders. The hurdle to widespread deployment remains economic, but some wonder whether that corner has been turned as well.
Until the mid-1990s, network capacity increased steadily by a factor of four every 5 or 6 years. By all accounts, the migration to 40G has occurred much more slowly than the norm, and part of that delay may be attributable to some of the technical challenges inherent in higher-speed transmission as well as the bursting of the telecom bubble, says Per Hansen, director of business development at ADVA Optical Networking (www.advaoptical.com). But he also points to the widespread acceptance of WDM technology as a contributing factor that may have “taken the pressure off 40 gig.”
However, an increase in data traffic (some carriers report 75% to 125% year-over-year growth) and the emergence of triple-play services necessitate the move to a higher bit rate. In a webinar sponsored by EXFO (www.exfo.com) entitled “40 Gbits/sec: Higher Speeds, New Challenges,” senior product manager Francis Audet noted that the emergence of bandwidth-hungry applications requires bandwidth in the core to be greater than any single constituent signal from the edge. So the key driver is there, he says, noting that most Tier 1 operators today are looking into 40G. It's the business model that has caused some hesitation.
“I don’t think anyone wants to spend money on 40 gig because it’s cool,” admits Niall Robinson, vice president of product marketing at Mintera Corp. (www.mintera.com). “They want to spend money on 40-gig deployments because it saves them cost in their network.”
“The carriers are trying to get to this half-the-cost, twice-the-bit-rate kind of metric where a 40G transmitter would cost us two to two-and-a-half times what a 10G transmitter would cost us,” agrees Glenn Wellbrock, director of network technology development at Verizon Business (www.verizonbusiness.com). “But guys are really struggling to get there.”
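Wellbrock's metric can be put in numbers. The sketch below (the 10G price is a normalized placeholder, not a vendor figure) shows why carriers insist on the 2× to 2.5× ceiling: only below 4× the 10G transponder price does 40G actually cut the cost per transmitted bit.

```python
# Hypothetical illustration of the "twice the bit-rate cost for four
# times the capacity" metric Wellbrock describes; the 10G price is
# an arbitrary normalized placeholder.
cost_10g = 1.0                          # normalized cost of one 10G transponder
for multiple in (2.0, 2.5, 4.0):
    cost_40g = multiple * cost_10g
    cost_per_bit_ratio = cost_40g / 4   # one 40G channel carries 4x the traffic
    print(f"40G at {multiple}x the 10G price -> "
          f"{cost_per_bit_ratio:.3f}x the cost per bit")
# At 2x-2.5x the transponder price, cost per bit falls to 0.5x-0.625x
# of the 10G figure; at 4x the price, 40G saves nothing per bit.
```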
The leap from 2.5- to 10-Gbit/sec transmission was inherently less difficult than the jump to 40 Gbits/sec, Wellbrock explains. “For the first time, we’re doing something smarter than just turning the light on and off. It took a ton of innovation to get from 2.5 to 10 [Gbits/sec], it was the first time we had to worry about PMD [polarization-mode dispersion] and things like that, but we didn’t change how we did the transmitter and receiver,” he says. “We used direct modulation, but at 40 gig, that doesn’t work very well.”
Light is modulated 4× faster at 40 Gbits/sec versus 10 Gbits/sec, reports Audet, and therein lies the key challenge. The faster modulation broadens the signal’s optical spectrum, which in turn requires the passbands of all the multiplexers, demultiplexers, and filters in the network to be 4× wider. “Four times larger means there is four times as much noise coming into your system,” says Audet. “For similar power, that’s going to be about 6 dB less of OSNR [optical signal-to-noise ratio] on your transmitter, and that’s one of the big issues limiting 40G.”
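Audet's 6-dB figure follows directly from the bandwidth arithmetic: a 4× wider noise bandwidth admits 4× the amplifier noise, and in decibels that is 10·log10(4), roughly 6 dB. A minimal check:

```python
import math

# Quadrupling the bit rate widens the receiver's noise bandwidth 4x,
# so 4x as much amplifier noise is collected.  At the same signal
# power, the OSNR degradation in dB is:
bandwidth_factor = 4                      # 40 Gbits/sec vs. 10 Gbits/sec
osnr_penalty_db = 10 * math.log10(bandwidth_factor)
print(f"OSNR penalty: {osnr_penalty_db:.1f} dB")   # ~6.0 dB
```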
Dispersion tolerance also shrinks rapidly with bit rate. With traditional non-return-to-zero (NRZ) modulation, the chromatic dispersion (CD) penalty would be 16× worse and the PMD penalty 4× worse at 40G than at 10G, thereby greatly reducing transmission distance. “To have a good 40 gig,” says Audet, “we’ll need to go somewhere other than NRZ because of the OSNR issues, the CD, and PMD.”
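The 16× and 4× figures reflect how the two impairments scale: CD tolerance falls with the square of the bit rate, PMD tolerance linearly. A sketch of that scaling, using an illustrative (not vendor-specified) 10G reach figure:

```python
# Sketch of the scaling Audet cites: chromatic-dispersion-limited reach
# shrinks with the square of the bit-rate increase, PMD tolerance
# linearly.  The 10G reach below is an illustrative placeholder.
def cd_limited_reach(reach_10g_km, rate_gbps):
    """CD-limited reach scales as 1/(bit-rate ratio)^2 relative to 10G NRZ."""
    return reach_10g_km / (rate_gbps / 10) ** 2

reach_10g = 1000.0                        # hypothetical CD-limited 10G reach, km
reach_40g = cd_limited_reach(reach_10g, 40)
print(f"CD-limited reach at 40G: {reach_40g:.1f} km")   # 16x shorter: 62.5 km

pmd_factor = 40 / 10                      # PMD tolerance shrinks linearly: 4x
print(f"PMD tolerance at 40G: 1/{pmd_factor:.0f} of the 10G value")
```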
Moreover, adds Kevin Drury, director of optical product and solutions marketing at Nortel (www.nortel.com), these new dispersion challenges necessitate additional dispersion compensating modules (DCMs) and amplifiers, which erode the link budget and add cost to a 40-Gbit/sec system that is supposed to cost just 2× to 2.5× as much as a 10-Gbit/sec system.
According to Hansen, 40-Gbit/sec deployments to date have been in the backbone, where the technology today is more cost-effective. “There is a great sharing of costs because there are many signals that are being aggregated into the backbone,” he notes. “You can typically afford to spend more money to get more capacity at that part of the network. How rapidly it can move toward the edge will depend on how quickly it will become cost-effective.”
That said, Hansen explains that there may be scenarios in which a carrier would choose to deploy a 40G system for a given application even if the 10G system is less expensive. The carrier may have a router with a 40G interface, for example, and changing the interface would result in additional cost that would have to be folded into the business model for choosing 40G over 10G. Or, he says, there may be places where digging up the street to install a fiber “is so unappealing, [the carrier] is willing to live with a higher cost to increase [its] bandwidth.”
The folks at Mintera argue a different angle. Instead of weighing the economic disparity between 40G and 10G, they say, it may make more sense to look at the cost of 40G in the context of four wavelengths of 10G, a scheme currently employed by several carriers.
“People tend to compare apples and oranges when this question gets asked because they take the cheapest 10-gig transponder, which maybe has the capability of a few tens of kilometers, and say that’s the price basis against which you want to compare 40 gig,” explains Terry Unter, chief executive officer of Mintera. It is his contention that the deployments occurring today in the long haul and ultralong haul will help drive economies of scale that will result in a more rapid price reduction than the industry is seeing today for 10-Gbit/sec components. “We are now working on products that will be deployed in the next 12 to 18 months that will be cost-competitive in the more mature metro and metro core networks,” he adds.
The recent development of advanced modulation schemes also should help speed the deployment of 40G. Among the modulation schemes developed to mitigate the effects of dispersion at higher bit rates, differential phase-shift keying (DPSK) appears to be the most promising, says Audet. In this case, the data is carried in the phase as well as the amplitude of the light, yielding a receiver sensitivity roughly 3 dB better than other schemes. “Because 40 gig has a 6-dB penalty, having a modulation scheme that gives us a free 3 dB is extremely interesting,” he muses. DPSK also is more robust against CD and PMD. Moreover, its spectral efficiency is roughly 2.5× better than that of traditional NRZ-modulated signals, which enables a DPSK-modulated signal to transmit at 50-GHz WDM channel spacing. “We can have 50-GHz 10-gig and 40-gig transmitted on the same fiber,” says Audet.
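Audet's "free 3 dB" can be framed as a back-of-the-envelope link budget: the 6-dB OSNR penalty from quadrupling the bit rate, partly offset by DPSK's roughly 3-dB sensitivity advantage, leaves about half the penalty to be absorbed elsewhere in the design.

```python
# Back-of-the-envelope link budget from Audet's figures: the 6-dB
# OSNR penalty of quadrupling the bit rate, partly offset by DPSK's
# ~3-dB sensitivity advantage over on-off keying.
osnr_penalty_db = 6.0        # from the 4x wider noise bandwidth
dpsk_gain_db = 3.0           # DPSK receiver-sensitivity advantage
residual_penalty_db = osnr_penalty_db - dpsk_gain_db
print(f"Residual OSNR penalty with DPSK: {residual_penalty_db:.0f} dB")
```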
This capability cannot be overstated, adds Unter, whose company uses both advanced modulation and “clever chromatic dispersion compensation” within its module to enable 50-GHz channel spacing. “This is a leading differentiator in the 40-gig space,” he says. “From an economic point of view, the position that we’ve taken as a company is that 40 Gbits/sec will only be economically viable for service providers if they can run it on the infrastructures they have already built for 10 gig,” he reports. “The challenge for us up until now has been to come out with a deployable, cost-effective solution that will obey 10-Gbit/sec design rules and run over a 10-Gbit/sec infrastructure, so a service provider can add 40-gig services without investing any money to upgrade the amplifiers, fiber, and dispersion compensation already installed for a 10-gig solution.”
Joe Lawrence, principal architect at Level 3 (www.level3.com), agrees that today’s price points remain too high, but he wonders how much of that is the result of the aforementioned technological hurdles. “It’s really the volumes,” he asserts. “What’s attractive about 10-GigE isn’t necessarily the technology. It’s that once a chip is manufactured, it gets shipped in volumes of millions, not in volumes of a few thousand.”
Serge Melle, vice president of technical marketing and business development at Infinera (www.infinera.com), agrees. The cost to interconnect router ports to WDM line systems at 40 Gbits/sec is still too high, he says. “The components for 40 gig have not come down nearly enough in cost for that to be a widespread, economically attractive alternative today,” he says, noting that 40-Gbit/sec components are still 6× to 7× more expensive than 10-Gbit/sec components.
To address this issue, Infinera has joined eight other vendors to establish the X40 multisource agreement (MSA) to develop a multirate 40-Gbit/sec optical transceiver that is only 2.5× more expensive than its 10-Gbit/sec counterpart. According to Infinera’s Vijay Vusirikala, the key goals of the MSA are the introduction of pluggability and reduction of power consumption. “That’s when we expect to see the tipping point to more widespread adoption of 40-gig interfaces,” he contends.
That said, even if volumes do drive down the cost of 40-Gbit/sec components, for some carriers, including Level 3, it may be a moot point. Lawrence reports that Level 3 already is operating its backbone links at speeds “in excess of eight [wavelengths] by 10 gig between city pairs.” As such, the carrier is keenly interested in the work of the recently formed IEEE 802.3 Higher Speed Study Group (HSSG), which has been tasked to evaluate demand for even higher bit rates, including the current front-runner, 100-Gigabit Ethernet.
“We view that 100-GigE is very much going to be the right solution for us within our data infrastructure, but that doesn’t mean that we think 40 gig is dead,” he explains. “There are a number of customers of ours who might have a need for 40 gig, so I don’t think we’re going to say that isn’t something we’d support. We just don’t think that’s going to deliver the most economical solution.”
Editor’s Note: For more information about the recent groundswell of support for 100-GbE technology, see “Industry Debates Merits of 40G versus 100G” in our January 2007 issue.