Native Packet Optical networks empower next-generation Ethernet service migration

March 21, 2011
There is still a long way to go in terms of migrating and optimizing today’s transport networks from legacy T1 and SONET/SDH services to Ethernet transport for these new services. But a Native Packet Optical approach might be the best way to get there.

We have all seen the charts that show traffic exploding throughout residential and business networks over the next few years. We’ve been presented with similar predictions since the mid 1990s (and probably earlier). Yet now it really seems to be happening. So what is different this time around?

Two main factors – attractive and viable applications and lower cost transport – combine to explain this wave of bandwidth. These two catalysts go hand in hand, as neither could drive the current rapid growth without the other.

The enterprise services industry, like the residential services industry, has a natural growth curve as day-to-day operations drive the need for fatter and fatter access pipes. But we also see a rapid rise in new applications that further drive bandwidth to the enterprise. Key among these is the start of the shift towards cloud computing and the virtualization of applications that were traditionally held in-house, such as distributed email and hosted offerings like salesforce.com. Furthermore, the move to Ethernet as a common transport mechanism has allowed the industry to drive down costs in transport networks, which has further increased the viability of these new services.

But there is still a long way to go in terms of migrating and optimizing today’s transport networks from legacy T1 and SONET/SDH services to Ethernet transport for these new services. So, let us look at where these services are going as we need to understand the future, as best we can, before we can embark on a strategy to migrate our networks.

Ethernet and enterprise services


The enterprise services industry includes many vertical sectors: providers offer services to the finance industry; media; small, medium, and large enterprises; and the education and local government sectors, to name a few. Each has its own set of specific requirements, but there are some common trends.

First is the migration to Ethernet as a standard transport protocol, often as a Layer 2 service. This evolution is well underway and looks likely to continue. The work of the Metro Ethernet Forum (MEF) has greatly helped standardize the services that vendors' hardware can support and that service providers can offer, which has gone a long way toward driving the uptake of Ethernet as a service to enterprises.

Of course, Ethernet will never replace everything, because application-specific services will be needed for some time to come. In the SAN area there are early moves towards Fibre Channel over Ethernet (FCoE), although not yet in the WAN. The video distribution market is also likely to use the DVB-ASI/SDI, HD-SDI, and newer 3G-SDI standards for the foreseeable future. So any Ethernet-based infrastructure will need to handle these "legacy" services easily, even though some, such as 8G Fibre Channel and 3G-SDI, are comparatively new protocols, certainly newer than Ethernet.

Looking at the end-user services themselves, we have already noted that cloud computing and virtualization are becoming increasingly important in the enterprise services market. So, how do these services drive the requirements of a transport network? For cloud computing and virtualization to work, the service needs to appear to the user as a locally hosted service, and that means it has to be fast. This requires the network operator to pay particular attention to the latency of the network. Perhaps not to the same degree as when building out services for the financial services community, where every microsecond, or even nanosecond, counts, but latency must still be low and controlled. Linked to this is the requirement that the variation in this latency (known as jitter) must also be as low as possible.

In addition to these requirements, the network operator naturally also needs a platform that enables the lowest possible capital and operational expenditures, along with the flexibility and scalability to grow with customer demand.

The value of a native packet optical architecture

One approach to addressing this range of requirements is the Native Packet Optical (NPO) architecture. This architecture is ideally suited to the edge of an optical network since it keeps the traffic payload as native Ethernet frames, as its name suggests. This provides an operator with better visibility and manageability of Layer 2 traffic, better aggregation of traffic to fill the pipes prior to handover to the core network and, of course, it lowers the cost of packet-optical integration at the edge of the optical network. NPO also supports non-Ethernet traffic via additional wavelengths and multi-service muxponders that multiplex Ethernet and other services together onto the same wavelength.

The NPO architecture combines the best of WDM optical layer technology with the best of Ethernet to support the transport of Ethernet and other services natively within the network. By using standard Ethernet as the transport payload for Ethernet traffic at the edge of the optical network, rather than wrapping it in an additional technology such as OTN, the network operator can use Layer 2 Ethernet technology to manage services and route traffic according to traffic type and service-level agreement (SLA), without constantly unwrapping and rewrapping the traffic at each node, which would add complexity and cost. In addition, the operator can use these Layer 2 devices to aggregate traffic more efficiently than with OTN at the edge, as shown in Figure 1.

Figure 1. OTN wrapping (top) versus Native Packet Optical (bottom) efficiencies. EDU: Ethernet Demarcation Unit, EMXP: Ethernet Muxponder.



OTN, of course, does have a significant role to play in optical networking. It has many benefits for managing traffic across multiple network operators, such as tandem connection monitoring. But in a network with a large proportion of Layer 2 Ethernet traffic, it only really makes economic sense once the payloads themselves (typically Gigabit Ethernet signals) are as full as possible. Therefore, in most cases, OTN has a role to play in the core but should not be pushed right to the edge of the network.
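
To make the fill argument concrete, here is a minimal sketch (Python) of the wavelength-count arithmetic behind Figure 1. The client loads and the simple capacity model are illustrative assumptions, not figures from any deployment; the fixed mapping of eight GbE clients (ODU0 timeslots) per 10G ODU2 wavelength follows standard OTN multiplexing.

```python
import math

WAVELENGTH_CAPACITY_GBPS = 10.0
CLIENTS_PER_ODU2 = 8   # eight ODU0 (GbE) timeslots fit in one 10G ODU2

# Hypothetical, partially filled GbE enterprise clients (offered load in Gbps).
client_loads_gbps = [0.3, 0.2, 0.5, 0.4, 0.1, 0.3, 0.2, 0.4, 0.3, 0.2, 0.5, 0.6]

# OTN at the edge: each client occupies a fixed timeslot, full or not.
otn_wavelengths = math.ceil(len(client_loads_gbps) / CLIENTS_PER_ODU2)

# NPO packet aggregation: wavelengths sized to the aggregate offered load
# (headroom and oversubscription policy ignored for simplicity).
aggregate_load = sum(client_loads_gbps)
npo_wavelengths = max(1, math.ceil(aggregate_load / WAVELENGTH_CAPACITY_GBPS))

print(f"{len(client_loads_gbps)} clients, aggregate load {aggregate_load:.1f} Gbps")
print(f"Wavelengths with OTN at the edge: {otn_wavelengths}")   # 2
print(f"Wavelengths with NPO aggregation: {npo_wavelengths}")   # 1
```

The point is not the specific numbers but the structure: OTN at the edge allocates capacity per client whether it is full or not, while packet aggregation sizes the wavelength to the traffic actually offered.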

NPO building blocks


Understandably, platforms that integrate Layer 2 Ethernet and Layer 1 WDM are the key elements of the NPO architecture. These systems are optimized for the transport of Ethernet traffic (and are therefore known as "transport Ethernet" platforms) as well as the aggregation and transport of Layer 2 Ethernet services. By combining WDM transport with Layer 2 Ethernet demarcation, aggregation, and transport in a single unit, NPO platforms help achieve better economics and simpler networks, with considerable further operational expenditure advantages, as shown in Figure 2.


Figure 2. Carrier Ethernet to transport Ethernet economics comparison.



These NPO transport Ethernet products can also use a different switching architecture than traditional Carrier Ethernet switches, which brings many advantages beyond lower cost, lower power consumption, etc. One key advantage is lower latency and close to zero jitter. As discussed earlier, as services to enterprise customers move more and more towards cloud computing and virtualization, these parameters become increasingly important.

A good NPO system offers latency and jitter a factor of three lower than the best traditional Carrier Ethernet switches: per-unit latency drops from more than 5 microseconds to less than 2 microseconds, and jitter falls to a tenth of a microsecond.

Such reductions might not initially sound like a huge advantage in a network where the latency of fiber, at 5 microseconds per kilometer, is the biggest contributor to overall latency. But it is very significant in Layer 2 networks in which traffic passes through many Layer 2 devices. Each device adds a little more latency and jitter to the end-to-end communication link. Therefore, at Layer 2, lower latency and jitter in the NPO platform can have a significant impact on the overall quality of service. And with fiber-related latency typically fixed due to geography, any operator who wishes to provide a low-latency service to enterprises must consider the Layer 2 nodes in any attempt to reduce latency and provide a differentiated service.
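
To illustrate the latency arithmetic, the following sketch (Python) totals fiber propagation and per-node switching delay for a hypothetical metro path, using the per-node figures quoted above (roughly 5 microseconds for a traditional Carrier Ethernet switch, under 2 microseconds for an NPO node); the route length and hop count are assumptions for illustration only.

```python
# Illustrative latency-budget sketch. The per-node figures are those quoted
# above (~5 us for a traditional Carrier Ethernet switch, <2 us for an NPO
# transport Ethernet node); route length and hop count are hypothetical.

FIBER_LATENCY_US_PER_KM = 5.0   # propagation delay in standard fiber

def end_to_end_latency_us(route_km: float, layer2_hops: int, per_node_us: float) -> float:
    """One-way latency: fiber propagation plus per-node switching delay."""
    return route_km * FIBER_LATENCY_US_PER_KM + layer2_hops * per_node_us

route_km = 80   # hypothetical metro route
hops = 10       # hypothetical number of Layer 2 devices in the path

carrier_ethernet = end_to_end_latency_us(route_km, hops, per_node_us=5.0)
npo_transport = end_to_end_latency_us(route_km, hops, per_node_us=2.0)

print(f"Carrier Ethernet path: {carrier_ethernet:.0f} us")   # 450 us
print(f"NPO transport path:    {npo_transport:.0f} us")      # 420 us
print(f"Saving from the Layer 2 nodes alone: {carrier_ethernet - npo_transport:.0f} us")
```

On this route the fiber still dominates, but the per-node saving grows linearly with hop count, which is exactly where a Layer 2-heavy network gains.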

Transport Ethernet products can also provide better Synchronous Ethernet (SyncE) support, should that be required, than typical Carrier Ethernet platforms. SyncE today is typically required for mobile backhaul services, but an operator who has this capability inherent in the NPO transport network is future-proofed should requirements change and therefore has a further advantage in the marketplace.

NPO theory versus practice

So, this approach sounds great in theory, but how successful could it be in practice?

The NPO architecture is a recent innovation, with initial products shipping in 2010. Early deployments have been in mobile backhaul, driven by the rush for Ethernet-based backhaul to meet the demands of smartphones, and in business Ethernet services to enterprise and local government customers. These customers have built transport Ethernet networks for the delivery of low-cost Ethernet services with low latency, virtually zero jitter and, in the case of mobile backhaul, excellent SyncE support. One key advantage that operators have already found is the operational simplicity of combining the transport aspects of Layer 2 Ethernet with the WDM layer.

It is early days for NPO as an architecture and for packet-optical integration at the edge of optical networks generally. But judging by the initial interest and uptake, the future looks bright for both operators and vendors who plan to go down this route.

Jon Baldry is technical marketing manager at Transmode.
