Ensuring cloud computing performance on data communications networks, Part 3

March 2, 2012
In this, the third and final installment of the series, the authors highlight some of the issues related to cloud computing performance and some of the work being done to resolve these issues.

In our first two installments of this cloud computing series, we explained how software designers and network architects can use the concept of reference points (RPs) to better understand cloud communications infrastructure (see Part 1) and discussed the challenges of delivering cloud services (see Part 2). In this, the third and final installment of the series, we will highlight some of the issues related to cloud computing performance and some of the work being done to resolve these issues.

Geographically, communication across the reference points inside or outside the cloud may take place within a single colocated data center or between data centers over WAN links. The latter scenario presents the challenge of providing high bandwidth between data center locations, and solving that challenge can be expensive.

There are several common communication characteristics for reference points within the cloud. First, communication happens in “service transaction” units that are no longer associated with the client; user/client identification is therefore lost. At the same time, requirements vary across service transactions. Some have real-time characteristics, such as read accesses to a database. Others are non-real-time, such as lazy writes from a database cache to database storage. Still others can be deferred to low-traffic periods, such as replication of data to remote servers.
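
To make the distinction concrete, consider the following sketch (illustrative only; the operation names, class labels, and default behavior are our own assumptions, not drawn from any standard), which tags each service transaction with a delay class so a scheduler could treat reads, lazy writes, and replication differently:

    from dataclasses import dataclass
    from enum import Enum

    class DelayClass(Enum):
        REAL_TIME = 0      # e.g., read access to a database; tight latency budget
        NON_REAL_TIME = 1  # e.g., lazy write from database cache to storage
        DEFERRABLE = 2     # e.g., replication to remote servers; run off-peak

    @dataclass
    class ServiceTransaction:
        # Note: no user/client identity here -- it is lost once the request
        # enters the cloud; only the transaction's service class remains.
        operation: str
        delay_class: DelayClass

    # Hypothetical mapping of operations to delay classes.
    CLASS_OF = {
        "db_read": DelayClass.REAL_TIME,
        "db_lazy_write": DelayClass.NON_REAL_TIME,
        "remote_replication": DelayClass.DEFERRABLE,
    }

    def classify(operation: str) -> ServiceTransaction:
        # Unknown operations default to non-real-time (an assumption).
        return ServiceTransaction(
            operation, CLASS_OF.get(operation, DelayClass.NON_REAL_TIME))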

From a quality-of-service (QoS) perspective, communication between reference points is characterized by very high speed and high volumes of information exchange, since all services are centralized and run in the same cloud network. Traffic on the link between two data centers may be separated into several priority classes based on delay-guarantee requirements. Protocols should also provide reliable data delivery with network-based flow control. On links shared by multiple cloud operators that provide communication pipes between multiple data centers, “per-virtual-pipe” guarantees and isolation should be established, while priorities are still maintained within each pipe.
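
One way to picture “per-virtual-pipe” guarantees with priorities preserved inside each pipe is a two-level scheduler: an outer level that shares link bandwidth across pipes (one per cloud operator), and an inner level that serves each pipe's queues in strict priority order. The following sketch is a simplified software model under those assumptions; in practice such scheduling is implemented in switch hardware:

    from collections import deque

    class VirtualPipe:
        """One operator's pipe: strict-priority queues inside a bandwidth share."""
        def __init__(self, name, weight, num_priorities=3):
            self.name = name
            self.weight = weight  # share of link bandwidth for this pipe
            self.queues = [deque() for _ in range(num_priorities)]

        def enqueue(self, priority, frame):
            self.queues[priority].append(frame)

        def dequeue(self):
            # Strict priority within the pipe: lowest index drains first.
            for q in self.queues:
                if q:
                    return q.popleft()
            return None

    def schedule(pipes, rounds):
        """Outer level: weighted round-robin across pipes isolates operators.
        Inner level: each pipe picks its own highest-priority frame."""
        sent = []
        for _ in range(rounds):
            for pipe in pipes:
                for _ in range(pipe.weight):  # weight = dequeue chances per round
                    frame = pipe.dequeue()
                    if frame is not None:
                        sent.append((pipe.name, frame))
        return sent

The weighted round-robin at the outer level isolates operators from one another, while the strict-priority pass inside each pipe keeps delay-sensitive transactions ahead of deferrable ones.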

The right protocol for the job
Communication issues within the data center are being addressed by the IEEE 802.1 Data Center Bridging (DCB) initiative, which extends standard Ethernet with several features that partially address the requirements we describe. In particular, DCB adds per-priority, link-level flow control (Priority-based Flow Control, 802.1Qbb); congestion notification (802.1Qau); bandwidth allocation per traffic priority (Enhanced Transmission Selection, 802.1Qaz); and a common protocol (DCBX) through which to exchange those capabilities across the network.
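
As a rough illustration of how two of these features interact, the sketch below models an ETS-style bandwidth table alongside PFC pause thresholds for one link. The percentages, priority numbers, and queue thresholds are invented for illustration; in a real deployment they are negotiated between peers via DCBX and enforced in hardware:

    # Illustrative model of two DCB features on one link (values invented):
    # - ETS (802.1Qaz): a bandwidth percentage per traffic priority
    # - PFC (802.1Qbb): per-priority pause when a queue crosses a threshold

    ETS_BANDWIDTH = {  # priority -> % of link bandwidth (sums to 100)
        0: 10,  # bulk / deferrable traffic
        3: 40,  # storage traffic -- typically a lossless, PFC-enabled class
        5: 50,  # latency-sensitive service transactions
    }

    PFC_XOFF = 80  # pause the peer when a priority queue exceeds this depth
    PFC_XON = 40   # resume once the queue drains below this depth

    class PriorityQueueState:
        def __init__(self, priority):
            self.priority = priority
            self.depth = 0
            self.paused = False

        def on_frame_arrival(self):
            self.depth += 1
            if not self.paused and self.depth >= PFC_XOFF:
                self.paused = True  # send a PFC pause for this priority only
            return self.paused

        def on_frame_departure(self):
            self.depth = max(0, self.depth - 1)
            if self.paused and self.depth <= PFC_XON:
                self.paused = False  # send a PFC resume
            return self.paused

Because the pause applies to a single priority rather than the whole link, storage traffic can be made lossless without stalling the latency-sensitive classes beside it.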

Another significant trend in data center communication today is the consolidation of all communication onto pure Layer 2 Ethernet. This eliminates IP routing, Fibre Channel, and other technologies and confines everything to Ethernet switching. The practical consequence is that the SAN is based on the same Ethernet switching technology as the rest of the cloud network, so some of the distance/latency issues associated with long-distance SAN transmissions are eliminated. It also means that QoS configurations and bandwidth profiles can be consistent network-wide, end to end.

Ethernet's link-level, multi-priority backpressure and endpoint congestion-notification mechanisms are important. They provide efficient, reliable high-speed communication within the cloud and mitigate some of the shortcomings of transport-level protocols such as TCP.

The Metro Ethernet Forum (MEF) addresses problems of intra-cloud communication as well. It has defined methods for building Ethernet virtual connections (EVCs) as high-capacity pipes with defined rates, while preserving priority handling of traffic within those pipes. It is also significant that the MEF service definitions confine all communication to Layer 2 networks. As a result, the endpoint IP addresses may belong to a single large IP network, within which an IP address can move unchanged from one network port to any other.
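
MEF bandwidth profiles of this kind are typically enforced with token buckets. The sketch below shows a simplified single-rate policer with assumed CIR (committed information rate) and CBS (committed burst size) values; the MEF specifications also define an excess rate (EIR/EBS) and a three-color marking scheme, which we omit here for brevity:

    import time

    class SingleRatePolicer:
        """Simplified MEF-style bandwidth profile: one token bucket
        (CIR/CBS only; the EIR/EBS 'yellow' bucket of MEF 10 is omitted)."""
        def __init__(self, cir_bps, cbs_bytes):
            self.cir = cir_bps / 8.0   # committed rate, bytes per second
            self.cbs = cbs_bytes       # committed burst size, bytes
            self.tokens = cbs_bytes
            self.last = time.monotonic()

        def admit(self, frame_len):
            now = time.monotonic()
            # Refill tokens at the committed rate, capped at the burst size.
            self.tokens = min(self.cbs,
                              self.tokens + (now - self.last) * self.cir)
            self.last = now
            if frame_len <= self.tokens:
                self.tokens -= frame_len
                return True   # "green": within the EVC's committed rate
            return False      # out of profile: drop or mark, per policy

    # Example: a 100-Mbps EVC with a 64-KB committed burst (values assumed).
    evc_policer = SingleRatePolicer(cir_bps=100_000_000, cbs_bytes=64_000)

Policing each EVC this way enforces the per-pipe rate at the edge, while the priority markings carried inside the pipe remain untouched for the switches along the path.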

Reference points for the cloud
Applying industry-standard techniques to the reference points we have described in this series can assure cloud network designers and application developers of robust application performance for future cloud network projects.

The reference points are relevant for the new work being done on cloud communications protocols in the IEEE and specifications at the MEF. As cloud networks grow, new separations will occur between and among cloud elements. New cloud elements will arise. New and specialized APIs to connect cloud elements will be developed. These new elements will be integrated into telecommunications networks and Web applications.

However, there is already sufficient bandwidth and there are adequate traffic management techniques to ensure the continued growth of cloud networks. Looking out a few years, it is easy to imagine a time when cloud services become indispensable to business operations and to people’s daily lives.

Mannix O’Connor is director of technical marketing at MRV Communications. He was chair of the Access Working Group for the MEF and founding secretary of the IEEE 802.17 Working Group. Mannix is a coauthor of the recent book Delivering Carrier Ethernet, published by McGraw-Hill.

Vladimir Bronstein is an independent consultant and has more than 20 years’ experience in telecom and data networking as a systems architect and director of software engineering. His experience encompasses broadband access, optical networking, and wireline and wireless networking as well as cable technologies. He has several patents pending for his networking innovations and has participated in industry standardization activities.



