Optical Internet: Redefining the network paradigm

May 1, 2001
SONET/SDH, ATM, IP

How will we meld today's technology to create efficient multiservice networks?

FAIZEL LAKHANI, Caspian Networks

In examining the evolution of multiservice network models, it's clear that two paradigms have consistently been in opposition: the connection-oriented approach (with out-of-band signaling) and the connectionless approach. Neither of these models has managed to solve the fundamental problems of reliability, scalability, and guaranteed quality-of-service (QoS) delivery.

The challenge lies in providing dynamic interaction, smooth migration, and distributed intelligence. Approaches and strategies vary and include evolutionary QoS models, innovative multiplexing technologies, integrated packet/optical strategies, and increasingly scalable products, each of which offers pros and cons.

Optical technologies have been integrated aggressively into carriers' networks since the late 1990s. Will optical-based systems ultimately replace packet switching? This question is hotly debated. The answer seems clear: These technologies will need to work efficiently together, or neither will achieve its promise.

During the last decade two major multiplexing technologies emerged: ATM and Multiprotocol Label Switching (MPLS). Based largely on its promise as a unifying technology, ATM grew rapidly and has evolved to become a staple technology in today's networks. MPLS's market acceptance looks as though it's on a similar track to ATM's.

MPLS emerged from several Internet Protocol (IP) switching technologies of the mid-1990s. The initial drivers were improved performance in routing lookups and a desire to move away from IP-over-ATM overlay models to more scalable integrated peer models. Core routers were already acquiring a reputation as less than reliable and not adequately scalable. Therefore, network architects looked increasingly to switches as routers' eventual replacements.

At first, MPLS was synonymous with using IP control to manage ATM switches. In time, software and hardware evolutions made the performance argument obsolete. MPLS then became more closely associated with its traffic engineering capabilities, notably its ability to define paths within a network that differ from those derived by dynamic routing protocols.
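The routing-lookup argument mentioned above can be made concrete with a small sketch: an IP router must perform a longest-prefix match for every packet, while a label-switched router does a single exact-match lookup on a fixed-size label. All prefixes, labels, and interface names below are hypothetical, chosen only to illustrate the difference.

```python
# Sketch: IP longest-prefix match vs. MPLS label swap.
# All prefixes, labels, and interface names are hypothetical.

# An IP forwarding table: the router must find the *longest* matching prefix.
ip_fib = {
    "10.0.0.0/8": "if0",
    "10.1.0.0/16": "if1",
}

def to_bits(dotted):
    """Convert a dotted-quad address to a 32-character bit string."""
    return "".join(f"{int(octet):08b}" for octet in dotted.split("."))

def ip_lookup(addr, fib):
    """Longest-prefix match: scan all prefixes, keep the most specific hit."""
    addr_bits = to_bits(addr)
    best, best_len = None, -1
    for prefix, iface in fib.items():
        net, plen = prefix.split("/")
        plen = int(plen)
        if addr_bits.startswith(to_bits(net)[:plen]) and plen > best_len:
            best, best_len = iface, plen
    return best

# An MPLS label-switched router only swaps a fixed-size label: one exact lookup.
lsr_table = {
    17: (42, "if1"),   # incoming label 17 -> outgoing label 42 on if1
    23: (99, "if0"),
}

def label_swap(in_label, table):
    return table[in_label]   # exact match, no prefix search

print(ip_lookup("10.1.2.3", ip_fib))   # matches the more specific /16
print(label_swap(17, lsr_table))
```

The exact-match table is what made early label switching attractive for hardware: a bounded lookup regardless of table shape, compared with a variable-length prefix search.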

Traffic engineering was only a step removed from an overlay model, however. Networks were still fighting most of the drawbacks of the IP-over-ATM overlay model, the same drawbacks that triggered the initial move toward the MPLS framework in the first place.

If it is not new in concept or philosophy, what is MPLS? From a practical perspective, MPLS is no more than a unifying multiplexing technology for a variety of services; today it is mainly restricted to IP services, and it is progressively evolving toward support for ATM, frame relay, and time-division multiplexing (TDM) services.

MPLS is evolving to become what ATM was meant to be. Will MPLS be successful in achieving its promise? It appears to be on its way to doing so.

While MPLS clearly solves some traffic engineering challenges, it fails to address reliability, resource utilization, scalability, and QoS concerns. A shift is required to combine the benefits of out-of-band signaling with the performance advantages of routed networks.

More to the point, is MPLS sufficient to cope with the Internet's exponential growth? MPLS is necessary, but it is certainly not sufficient. To scale, flexibility must be restored to the Internet and its underlying protocols.

The convergence of telecommunications and data communications cannot happen without a convergence of the underlying technologies. Telecom networks traditionally were based on connection-oriented paradigms, which promised reliability and QoS guarantees, but also an overall rigidity. IP networks, on the other hand, are associated with connectionless paradigms offering inherent flexibility.

Optical technology has been integrated aggressively into carriers' networks since the late 1990s. That will no doubt continue, given that advanced Internet applications (videophone, online gaming, television, MP3, etc.) are creating mushrooming bandwidth demands.
Figure 1. Available network physical capacity and required network capacity reached a crossover point in 1997; since then, the need for capacity has outstripped supply.

The need for network capacity will continue to outstrip supply. Internet traffic is doubling every six months, while each fourfold step in optical line rate, from OC-48 (2.5 Gbits/sec) to OC-192 (10 Gbits/sec) and from OC-192 to OC-768 (40 Gbits/sec), takes approximately three years.
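A little arithmetic makes the mismatch in these two rates concrete. Using the figures above (traffic doubling every six months; one fourfold line-rate step roughly every three years):

```python
# Traffic doubles every 6 months -> 2 doublings per year -> 4x per year.
# Line-rate steps (OC-48 -> OC-192 -> OC-768) are each 4x, roughly every 3 years.
years = 3
traffic_growth = 2 ** (2 * years)   # 4x/year compounded over 3 years = 64x
capacity_growth = 4                 # one 4x line-rate step in the same period
print(traffic_growth, capacity_growth)   # 64x demand vs. 4x line rate
```

At these rates, a single technology generation leaves a sixteenfold gap, which is exactly the pressure behind the crossover in Figure 1.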

Figure 1 illustrates a clear crossover between the available physical capacity and required capacity of the network. That points to the need for mechanisms that enable more efficient utilization of network resources and a tighter integration of multiplexing and transmission mechanisms.

The emergence of WDM/DWDM technology has addressed part of the problem by increasing fiber capacity by several orders of magnitude. But that's just the tip of the iceberg. Larger optical transport pipes using more and more frequencies require a substantial amount of associated hardware, with dramatic power requirements. Given the realities of budget constraints and a seemingly endless litany of power crises (with, ironically, the most recent wave hitting California, the cradle of high technology), a shift is required in how we make maximum use of that capacity.
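The "orders of magnitude" claim is easy to check with back-of-the-envelope numbers; the channel count and per-channel rate below are illustrative assumptions for a dense system of the era, not figures from the article:

```python
# Illustrative DWDM capacity arithmetic (channel count and rates are assumptions).
single_channel_gbps = 2.5            # one OC-48 wavelength on unamplified fiber
dwdm_channels = 160                  # a dense system of this period
per_channel_gbps = 10                # OC-192 on each wavelength
fiber_capacity_gbps = dwdm_channels * per_channel_gbps
print(fiber_capacity_gbps)                            # 1600 Gbits/sec per fiber
print(fiber_capacity_gbps / single_channel_gbps)      # ~640x one OC-48 channel
```

Every one of those wavelengths, however, still needs its own transponders and line cards, which is where the hardware and power bill comes from.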

Advances in photonic-switching systems are fundamental to network evolution and simplification. Such advances not only provide improvement in terms of power and space, but also help minimize the number of optical-to-electrical conversions, delivering significant cost and performance benefits.

Effective transmission of large traffic aggregates at very high speed is one of the most fundamental enablers of the Internet infrastructure. Nevertheless, this development alone is not sufficient. Packet-layer aggregation and higher-layer intelligence must closely follow.

Packet networks provide statistical multiplexing and aggregation in the only way that is logical today: IP. Optical networks provide path-level connectivity; multiplexing at the optical layer would otherwise sacrifice considerable capacity to circuit behaviors. Packet-level traffic aggregation allows efficient loading of optical trunks; in fact, it's the only way to handle the operation at this time.
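The statistical multiplexing gain behind this argument can be sketched in a few lines: many bursty sources, each idle most of the time, need far less shared capacity than the sum of their individual peaks, which is what a circuit (or lightpath) per source would have to reserve. All the numbers here are illustrative assumptions, not measurements.

```python
import random

# Sketch of statistical multiplexing gain. Assumptions: 100 bursty sources,
# each transmitting at its peak rate of 1.0 unit only 10% of the time.
random.seed(1)
n_sources, n_slots, peak_rate, duty_cycle = 100, 10_000, 1.0, 0.1

max_aggregate = 0.0
for _ in range(n_slots):
    # In each time slot, each source is active with probability duty_cycle.
    load = sum(peak_rate for _ in range(n_sources) if random.random() < duty_cycle)
    max_aggregate = max(max_aggregate, load)

# A circuit per source must reserve every source's peak rate.
circuit_capacity = n_sources * peak_rate
print(circuit_capacity)    # 100.0 units reserved under circuit behavior
print(max_aggregate)       # observed aggregate peak, far below 100
```

The gap between the two printed numbers is the capacity a circuit-only optical layer would strand, and the reason packet-level aggregation loads trunks so much more efficiently.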

Intelligent packet switching can offer fast recovery from network outages caused by backhoe fade or other equipment failures. Dynamic rerouting of traffic with minimal latency and data loss can occur on the order of tens of milliseconds using packet switching. The best optical switches offer recovery times of about 50 msec, with the additional burden of recomputing routing adjacencies at the IP layer, which can take many seconds. Optical switching also assumes that network bandwidth is freely available to reallocate when needed.
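Put as a rough budget, the comparison reduces to a sum; the specific millisecond figures below are illustrative assumptions drawn from the ranges in the text:

```python
# Rough failure-recovery budgets (all figures are illustrative assumptions).
packet_reroute_ms = 50               # "tens of milliseconds" for packet rerouting
optical_switch_ms = 50               # best-case optical protection switch
ip_reconvergence_ms = 5_000          # IP adjacencies recomputed: "many seconds"

optical_total_ms = optical_switch_ms + ip_reconvergence_ms
print(packet_reroute_ms)    # packet-layer recovery alone
print(optical_total_ms)     # optical switch plus IP reconvergence above it
```

The point is not the 50-msec switch itself but the serialized IP reconvergence that follows it, which dominates the optical path's total recovery time.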
Figure 2. Today's network infrastructure comprises disparate layers, creating a complex and inefficient environment.

The industry needs more efficient, flexible, and dynamic utilization of network resources. That's evident from the emergence of photonic switching and the increased coupling of intelligent packet switching with optical technologies. Augmenting lower layers' efficiency with higher layers' intelligence is clearly a winning formula (see Figure 2).

Today, packet networks and transmission networks are viewed as separate entities from a technological, management, and organizational perspective. Innovations in both the electronic and photonic worlds are leading to a collapse of this demarcation. These networks are moving from strict separation to smooth, evolving integration. The packet layer and the photonic layer are likely to merge, complementing each other in a very dynamic way, with a progressive distribution of the intelligence across both entities. The challenge is in providing dynamic interaction, smooth migration, and distributed intelligence.

Will photonic switching ultimately replace packet switching? Not within a period of five to 10 years. Rather, it appears likely that these technologies will need to work efficiently together, or neither will achieve its promise.

Expanding edge and core networks has always been synonymous with forklift upgrades: replacing an outmoded infrastructure with a more powerful, flexible, scalable one. The recurring expense is staggering, and the end result is often more complex and less reliable than the system it replaced. That's largely because upgrades usually mean adding new routers, and more of them, to a network, which often involves heavy configuration and management overhead.

Moreover, adding routers and replacing equipment can affect routing and Layer 3 topologies. That's probably one of the most critical challenges service providers will face in years to come. Service providers, out of necessity, put more effort into adding bandwidth than into providing and optimizing their service offerings. These problems will only intensify if the industry follows its current evolution.

The solution to this problem is overall network simplification. Service providers should be able to expand their networks by increasing the performance and reach of already-deployed devices, not by adding new boxes and replacing old ones. Routers and networks should scale by design.

Scaling by simplifying the network enables increased reliability. By decreasing the amount of churn in the infrastructure and the complexities in the topologies, the network is less subject to software and configuration failures. Next-generation equipment providers need to deliver breakthrough multilayer solutions that preserve carriers' investments in existing equipment and lay a solid foundation for many years of service.

In the absence of a grand plan, network evolution has largely involved tinkering around the edges of the same basic model that has been in use for more than a quarter-century. What's needed now is a reevaluation of the basic technology building blocks. A new solution that can achieve the goals of multiservice networks while providing a clear growth path from today's networks is required. Next-generation equipment providers must consider all network layers when developing their solutions, not just the physical or data-link layers.

The objective is lofty, and the path is far from straight. We must resolve numerous problems and avoid the usual pitfalls. Is today's approach to building and designing multiservice network infrastructures the best way to go? Are existing QoS mechanisms effective? Are today's multiplexing technologies the right solutions? Will photonic switching replace packet switching?

These are all hotly debated questions. The answers lie in some intermediate gray area and don't lend themselves to simply "yes" or "no" constructs. But ultimately, these answers will determine how well we develop the multiservice network infrastructure that's going to comprise the next-generation Internet.

Faizel Lakhani is vice president of network solutions at Caspian Networks (San Jose, CA). He can be reached at the company's Website, www.caspian.com.
