Pressure points for 2015
An uneven 2014 is coming to an end. Here's a look at what 2015 may bring in the way of optical communications technology trends.
In many ways, 2014 was the year of the data center. Data centers became the hot new market area for optical communications for two reasons: Interest in fiber has grown significantly there, and data-center technology cycles churn more rapidly than in carrier networks.
The data-center niche established itself as the epicenter - or at least the inspiration - of innovation. It's where development of software-defined networking (SDN) and network functions virtualization (NFV) began. Most silicon photonics efforts target data-center requirements. The mega data centers have emerged as a potential market for photonic switches.
About the only technology players perhaps not entirely intrigued by the data center were the standards developers. When the target application is the carrier environment, it can take three to five years to create a new standard. But network managers of big data centers don't have that kind of patience. And they don't require the imprimatur of the IEEE or the ITU-T on their technology choices. So 2014 also saw a bumper crop of multisource agreements (MSAs) for various non-standard Ethernet applications.
This upheaval promises an interesting 2015. As is the tradition here at Lightwave, I'll close our 2014 coverage with an outline of the technical topics I believe will prove important over the next 12 months. The discussion will encompass five application areas:
1. Networking (except for the last mile)
2. Fiber to the X
3. Trends specific to cable operators
4. Test and measurement
5. Equipment design
The following predictions derive from basic research, conversations at trade shows, phone interviews, and the arrangements of the leaves at the bottom of my mug when the teabag split open one morning last week. As always, actual outcomes and results may differ materially from what is expressed or forecasted. No wagering allowed.
Defining SDN for optical networks
The next 12 months will see transport SDN and NFV kick into a higher gear. In particular, anticipated Tier 1 operator field trials should indicate how SDN/NFV concepts will be deployed - and what effect they'll have on optical communications networks.
The increased clarity will be welcome, because transport SDN/NFV development so far has been rather chaotic. That probably shouldn't come as a surprise, given the stated goals of flexibility and an open environment. Anyone potentially can contribute something - and, so far, it appears almost everyone has tried. An alphabet soup of standards bodies (ONF, OIF, IETF, ETSI, etc.), third parties (such as the Open Networking Lab), industry groups such as OpenDaylight, and several individual vendors all have tried to stake a claim to some corner of SDN/NFV or assert their expertise. That means carriers will have a lot of options from which to choose. With luck, enabling SDN/NFV capabilities and creating new apps will eventually be as straightforward as working with Linux. (Oh, that's right...OpenDaylight is leveraging Linux already...)
Clearly, there won't be a single path to SDN/NFV. But look for more Tier 1 carriers to follow China Telecom's lead and implement these principles - in some cases as soon as 2015.
Meanwhile, as several market research firms have already forecasted, 2015 will see the start of metro-focused 100G deployment. As was the case in long haul, vendors with in-house DSP expertise will hit the market first. With pluggable coherent CFPs available now and coherent CFP2 devices following in 2015, systems houses dependent on their module partners should come online starting late in the year.
Web 2.0 providers have joined traditional service providers as target markets for mainstream optical transport equipment, and their requirements will help shape metro-focused optical communications technology. Ciena and Infinera, among others, asserted in 2014 that the influence of such companies in the metro creates a demand for new classes of packet and optical systems, rather than just footprint- and cost-optimized versions of long haul equipment. Other companies that have focused on the metro have made similar assertions previously. Regardless of whether you're a traditional carrier, cable operator, "new wave" data-center connectivity seeker, or content service provider, there'll be plenty of optical transport options available for metro/regional requirements. And many of these systems will feature encryption capabilities.
While 100G invades the metro, technology options for supporting even greater speeds will become increasingly available. Systems that support 200G and 100G via the same linecard have reached the field from at least one vendor; we can expect to see more in 2015. This sets the stage for 400G. Yet while we may see another deployment or two in 2015, the main action here won't occur until later years. Most of what we'll see in 2015 is preparation for future 400G deployments, as carriers ensure that they have the right flexible-grid ROADM and amplification technology in place (or at least on standby). Remotely optically pumped amplifiers (ROPAs) may become more common. And we'll hear discussion of the fiber requirements for coherent networks that rely on ROPAs and Raman amplification as well.
As 100G becomes more common for linking data centers together, technology to support the same data rates within these data centers (or among buildings in a data-center campus) should finally see more than token deployment in 2015 among higher-end users, thanks to the availability next year of CFP4- and QSFP28-enabled ports. Still, 10 and 40 Gigabit Ethernet will remain more popular with the majority of data centers that don't have mega-scale requirements.
How fast is G.fast?
Sorry, fiber to the home fans. The most influential FTTx technology in 2015 is going to be G.fast, which is designed to bring "fiber-like" capabilities to copper in the last mile (or, more precisely, the last couple of hundred meters or so).
With the gigabit broadband era clearly upon us, G.fast backers promise the technology will fit right in. But until we see the results of the field trials 2015 will bring, the conditions - loop length, copper quality, and number of users - under which G.fast can consistently support 1-Gbps rates remain unclear.
In fact, the definition of "gigabit speeds" is somewhat unclear, since G.fast transmission estimates generally aggregate downstream and upstream capacity. In terms of consistent throughput, a G.fast connection would have to deliver more than 1 Gbps in aggregate to match a 1-Gbps downstream fiber connection - and the first generation of G.fast technology doesn't look like it will support symmetrical 1-Gbps transmission at all.
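To see why an aggregate figure can mislead, consider a minimal sketch of the arithmetic. The capacity and split ratio below are illustrative assumptions for the sake of the example, not figures from the G.fast specification:

```python
# Illustrative arithmetic only: the aggregate capacity and the
# downstream/upstream split below are assumptions, not G.fast specs.
aggregate_gbps = 1.0      # the marketed "gigabit" figure (down + up combined)
downstream_share = 0.8    # hypothetical asymmetric split

downstream_gbps = aggregate_gbps * downstream_share
upstream_gbps = aggregate_gbps - downstream_gbps

print(f"Downstream: {downstream_gbps:.1f} Gbps, upstream: {upstream_gbps:.1f} Gbps")
# A fiber service advertising 1 Gbps downstream delivers the full gigabit
# in that direction; the "1-Gbps" aggregate copper link above does not.
```

Under any split short of devoting the entire aggregate to one direction, the downstream rate falls below the headline number - which is exactly the gap between "up to 1 Gbps" marketing and a symmetric fiber gigabit.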
Nevertheless, initial trials conducted this year have demonstrated connections with greater than 1-Gbps capacity, so it's not out of the question that copper-loving carriers will use G.fast to tout "up to 1 Gbps" services. And in scenarios where the competitive environment doesn't require more than 100 to 500 Mbps - or where fiber connections aren't practical - G.fast will prove very attractive.
All of this is not to say that FTTH technology development will stand still. The first time- and wavelength-division multiplexed PON (TWDM-PON) products that support a total capacity of 40 Gbps should reach the market early in 2015, and we've seen announcements of 100-Gbps TWDM-PON prototypes as well. The first skirmishes in the battle between TWDM-PON and 10G PON for post-GPON/EPON supremacy therefore could begin as early as the end of the year.
Otherwise, we're likely to see more evolutionary than revolutionary technology advances in 2015.
Cable MSOs target 1 Gbps
With competitors such as Google Fiber and CenturyLink offering 1-Gbps services via their FTTH networks, cable operators have started to respond. Some have already deployed FTTH themselves. Others are awaiting the arrival of DOCSIS 3.1 technology, which should generate the most buzz in the space in 2015.
DOCSIS 3.1 promises to support a shared 10 Gbps downstream, certainly enough to keep up with the average cable MSO's competitors if the number of subscribers per node is kept relatively low. The traditional weakness in DOCSIS and hybrid fiber-coax infrastructure has resided in the upstream. For now, most DOCSIS 3.1 technology vendors quote an upstream target of 1 to 2 Gbps.
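The "subscribers per node" caveat is worth making concrete. The node sizes below are illustrative assumptions, not operator figures; the point is simply how the shared 10 Gbps divides as nodes shrink:

```python
# Back-of-the-envelope sharing arithmetic. The 10-Gbps figure is the
# DOCSIS 3.1 downstream target cited above; node sizes are hypothetical.
downstream_shared_gbps = 10.0

for subs_per_node in (500, 250, 125):
    # Average capacity per subscriber if everyone drew traffic at once
    per_sub_mbps = downstream_shared_gbps * 1000 / subs_per_node
    print(f"{subs_per_node} subscribers/node -> {per_sub_mbps:.0f} Mbps average each")
```

Since subscribers rarely peak simultaneously, real-world per-user experience is better than these averages suggest - but the arithmetic shows why node splitting (and the fiber it requires) accompanies DOCSIS 3.1 plans.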
The first wave of DOCSIS 3.1-related products surfaced this fall; trials should begin soon, and CableLabs product certifications should begin to appear in the first half of 2015.
Meanwhile, CableLabs is hard at work on new specifications for operators who have recognized the power of fiber, particularly for business services. At the top of the to-do list sits a GPON version of the DOCSIS Provisioning of EPON (DPoE) specifications - DOCSIS Provisioning of GPON (DPoG) - driven by the many operators who decided to fight FTTH fire with fire via GPON architectures. Completion of the first set of DPoG specifications sets the stage for a GPON vs. EPON turf war for DOCSIS-friendly all-fiber infrastructures. GPON's FTTH popularity aside, at least one source active in the space suggests that EPON eventually will win, because it's more easily compatible with DOCSIS and provides a clearer path to 10-Gbps support.
Deployments of Converged Cable Access Platform (CCAP) technology, which supports IP services delivery, also will continue in 2015. These deployments should push fiber deeper into operator networks, further solidifying the role of optical communications in cable MSO networks.
Test speeds along
Whether we're talking lab and production applications or field use, increasing data rates continue to drive test and measurement technology development.
In the lab environment, many of the drivers we saw this year will continue in 2015. These catalysts include the need to support the development of 400 Gbps and greater transmission technology for carrier networks and expected 400 Gigabit Ethernet transmission for the data center.
The oscilloscope provides the foundation for most of the lab test applications in question. Vendors will continue to expand capacity; now that Teledyne LeCroy has announced the first 100-GHz real-time oscilloscope, the competition will play catch-up.
As the bandwidth and capabilities of real-time scopes scale, so will the cost. That will leave users wondering how much bandwidth and related horsepower they need, and whether they could use cheaper sampling oscilloscopes for a wider variety of applications. We've already seen optical modulation analysis capabilities developed for sampling scopes. The next 12 months should see more advancements along these lines.
Elsewhere in the lab, this year's run of PAM4-related-capability introductions should continue in 2015. Many of these announcements react to the expectation that the IEEE 400 Gigabit Ethernet Task Force will leverage this modulation format as it creates its specifications. Most discussions of PAM4 testing currently focus on 28- and 56-Gbps transmission rates, particularly for board traces, backplanes, and the lane rates 400GbE semiconductors likely will demand. But if the 400GbE standards makers decide to target single-wavelength 100 Gbps, we will see new demands for multilevel signaling test support emerge rapidly.
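The pairing of 28- and 56-Gbps rates in PAM4 discussions follows directly from the modulation format: PAM4 uses four amplitude levels, so each symbol carries two bits, and a lane clocked at 28 Gbaud delivers 56 Gbps. A minimal sketch of that arithmetic (the 28-Gbaud figure is simply the commonly cited lane rate):

```python
import math

def bits_per_symbol(levels: int) -> int:
    """Bits carried by one symbol of an amplitude modulation with `levels` levels."""
    return int(math.log2(levels))

baud_rate_gbaud = 28.0                      # commonly cited electrical lane rate
nrz_gbps = baud_rate_gbaud * bits_per_symbol(2)   # NRZ: 1 bit/symbol
pam4_gbps = baud_rate_gbaud * bits_per_symbol(4)  # PAM4: 2 bits/symbol

print(f"NRZ  at {baud_rate_gbaud:.0f} Gbaud -> {nrz_gbps:.0f} Gbps per lane")
print(f"PAM4 at {baud_rate_gbaud:.0f} Gbaud -> {pam4_gbps:.0f} Gbps per lane")
```

The same relationship explains the appeal for 400GbE: doubling bits per symbol doubles the lane rate without doubling the bandwidth the board traces and backplanes must carry.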
Out in the field, carriers have already begun to deploy 200-Gbps technology, according to Alcatel-Lucent. Such deployments foreshadow 400-Gbps implementation - and there isn't any field-test equipment available for either 200G or 400G as far as I know. Operators likely are relying on diagnostic capabilities built into the transmission systems, a trend I think will continue in 2015 as systems vendors increase the breadth and performance of such capabilities.
Besides this competitive challenge, field-test equipment vendors will have to start wrestling with what SDN means for them and their customers. This question has two facets. First, how do you monitor and troubleshoot the software-driven delivery of virtual functions in an infrastructure whose configuration will change more quickly and frequently than before? Second, it seems reasonable for users to wonder whether SDN/NFV principles can apply to test and measurement itself. Can test functions be virtualized? We can foresee demand for flexible test sets whose capabilities could be optimized for whatever the technician has on the agenda for a given day.
Test equipment vendors have made progress along this second axis already via cloud-enabled test sets. We should expect more of this in 2015.
Data-center requirements reshape equipment design
The data-center space has not only offered an inviting frontier for optical communications; it also has changed the rules for technology development. We'll see these new parameters continue to evolve in 2015.
The data-center market differs significantly from the carrier world. Deployment cycles are in the neighborhood of three to five years, which means products don't have to be engineered (or certified) to last 20 years. And standards requirements are becoming less strict; MSAs, particularly those that leverage more traditional standards, are frequently seen as good enough.
Nowhere was this last influence more apparent in 2014 than in the proliferation of optical-transceiver MSAs targeting 100 Gigabit Ethernet applications between 500 m and 2 km. The market is still sorting out winners and losers among these efforts, but that won't stop additional standards-alternative MSAs from springing up in 2015.
For example, debate has emerged within the IEEE 400 Gigabit Ethernet Task Force regarding the practicality of a 4×100-Gbps approach versus 8×50 Gbps. Aside from 400-Gbps communications, both approaches could lead to new methods of 100GbE support. It wouldn't be surprising to see MSAs along these lines in 2015.
We'll see the effects of 400GbE requirements in the fiber-optic cable realm as well. The TIA has opened discussion of specifications for a new multimode fiber class that would support four wavelengths. Multimode PMDs for 400GbE are expected to be based on 25-Gbps wavelengths transmitted in parallel. Using current technology, that's 32 fibers (16 in each of the transmit and receive directions). The new fiber class would offer a more efficient approach - and, again, might have use at 100 Gbps.
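The fiber-count arithmetic above can be sketched briefly. Assuming the proposed fiber class carries four 25-Gbps wavelengths per fiber (the four-wavelength figure comes from the TIA discussion noted above; the layout is an illustration, not a published specification):

```python
# Fiber-count arithmetic for 400GbE over multimode, as a sketch.
total_gbps = 400
lane_gbps = 25
lanes = total_gbps // lane_gbps            # 16 parallel 25-Gbps lanes

# Parallel-fiber approach: one fiber per lane, per direction
fibers_parallel = lanes * 2

# Proposed multimode class: four wavelengths share each fiber
wavelengths_per_fiber = 4
fibers_wdm = (lanes // wavelengths_per_fiber) * 2

print(f"Parallel fiber: {fibers_parallel} fibers total")
print(f"4-wavelength multimode: {fibers_wdm} fibers total")
```

The same wavelength stacking is what makes the new fiber class interesting at 100 Gbps: four 25-Gbps wavelengths would fit on a single fiber pair.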
Meanwhile, we'll see what the 25 Gigabit Ethernet effort - and debates about 50 Gigabit Ethernet - will reveal as well.
Last but not least, 2015 also should bring the results of some of the "second generation" silicon photonics efforts (after the first generation represented by such companies as Luxtera and Kotura/Mellanox).
Out on the line side, data centers also will have an effect, particularly on 100G for the metro. As already mentioned, we should see pluggable coherent modules become generally available from a variety of sources (and keep direct-detect 100G a niche technology). At least some of those modules will also support 200G, thanks to new third-party DSPs.
The effects of SDN and NFV on equipment design also will become more pronounced next year. Optical-system vendors generally brush off the idea of white box optical transport gear. But as developers learn to support delivery of optical-network abstractions to controllers and orchestrators, some sort of optical function virtualization will be discussed and demonstrated next year.