Cloud computing promises to reduce IT expenditures, increase network flexibility, and streamline communication infrastructure. This is the first in a series of three articles in which we will examine communications technologies as key enablers that can make cloud computing a reality. Specifically, we will look at protocols, traffic management, and the control of wide-area communications, and at how applying industry-standard techniques can assure network designers of robust performance in their cloud offerings.
In this article, we’ll define specific communications reference points (RPs) within cloud computing networks so that software designers and network architects can better understand and address specific problems. Each RP presents unique requirements and challenges for creating, maintaining, and managing connectivity. Creating proper communication paths on the network and configuring them correctly at these RPs, or interfaces, will maximize cloud performance. We will define each communication RP individually. For each RP, we will analyze the type and characteristics of the data communicated, describe the expected traffic patterns, and discuss the reliability requirements, the need (or lack thereof) for real-time traffic delivery, quality of service (QoS), and other communication characteristics, such as security.
Shaping the cloud
The applications cloud computing supports may affect millions of transactions and billions of dollars of commerce. They are relied upon for decision making, planning, contracts, and legal obligations. If the communications channels are unreliable, the impact can be far-reaching and costly. Therefore, accurate, timely, and secure communications are required for many cloud applications -- and sometimes the applications themselves must account for these potential communication issues. Cloud RPs enable cloud application developers and service providers to consider these issues in a more comprehensive way.
For example, if the communication channel adds significant delay or delay variation, then application performance may suffer. Likewise, if the communication network encounters congestion, cloud service users may see significant negative impacts. Many cloud applications require significant bandwidth; if bandwidth is inconsistent or constrained, those applications may be impaired. In severe cases, bandwidth constraints can cause time-sensitive transactions to fail.
Cloud network designers must contend with three major factors that can affect performance: QoS, security, and reliability. First, QoS can be assured via a number of controls so that packets receive appropriate treatment depending on their importance. QoS metrics help monitor such service elements as traffic profile (e.g., average and peak rate and burst size) and delivery (e.g., delay, delay variation, and dropped and “errored” packets).
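To make the delivery metrics concrete, the following sketch computes average delay, delay variation, and loss ratio from a hypothetical per-packet trace. The `PacketRecord` structure and the delay-variation formula (mean absolute difference between consecutive one-way delays, in the spirit of RFC 3393) are illustrative assumptions, not a standardized measurement tool.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Optional

@dataclass
class PacketRecord:
    sent_at: float                 # send timestamp, seconds
    received_at: Optional[float]   # None if the packet was dropped
    size_bytes: int

def qos_metrics(records: list) -> dict:
    """Summarize delay, delay variation, and loss for a packet trace."""
    delays = [r.received_at - r.sent_at for r in records if r.received_at is not None]
    dropped = sum(1 for r in records if r.received_at is None)
    avg = mean(delays) if delays else 0.0
    # Delay variation: mean absolute difference between consecutive delays.
    jitter = mean(abs(b - a) for a, b in zip(delays, delays[1:])) if len(delays) > 1 else 0.0
    return {
        "avg_delay_s": avg,
        "delay_variation_s": jitter,
        "loss_ratio": dropped / len(records) if records else 0.0,
    }
```

A monitoring agent could feed such a summary into an SLA report or trigger an alarm when a metric crosses a threshold.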
Second, security must be addressed within the cloud itself and at client devices that attach to the cloud. Encryption and authentication mechanisms therefore are recommended for cloud communications.
Finally, network reliability is also critical. We all know how significantly our work is affected when access to our corporate e-mail or servers is unavailable. Cloud networks require even higher availability standards; therefore, communication service management protocols are required to assess the performance and health of the communications channels within the cloud network.
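Standardized service management protocols such as Ethernet connectivity fault management (IEEE 802.1ag / ITU-T Y.1731) assess channel health by exchanging periodic continuity check messages and declaring a fault after several consecutive misses. The sketch below models only that fault-declaration idea; the function name and the default threshold of three misses are assumptions for illustration, not the actual protocol state machine.

```python
def channel_state(ccm_received: list, fault_threshold: int = 3) -> list:
    """Track UP/DOWN per check interval; declare DOWN after
    `fault_threshold` consecutive missed continuity checks."""
    state, misses, history = "UP", 0, []
    for ok in ccm_received:
        if ok:
            misses, state = 0, "UP"      # any successful check clears the fault
        else:
            misses += 1
            if misses >= fault_threshold:
                state = "DOWN"           # sustained loss of continuity
        history.append(state)
    return history
```

Requiring several consecutive misses before declaring a fault prevents a single lost check from triggering spurious protection switching.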
The technology and protocols behind passing data traffic among remote resources are complex, and the cloud service provider often does not own the data communications equipment and protocols used in the process. This situation will change as more cloud service providers build their own networks. In either case, it is important for a cloud provider to define requirements for communication channels in an unambiguous, industry-standard way, whether to a communication provider or for its internal configuration. Understanding the communications requirements at each RP therefore helps in designing a seamless and consistent cloud network.
Defining reference points
Let us define a specific communication path between each pair of communicating elements and specify a reference point for each such path. For cloud communications, we identify four primary communication paths:
- Client Device to Application Server
- Application Server to Application Server
- Application Server to Middleware Server
- Middleware Server to File Server
Performance requirements will differ among these four paths. The data communications network must be aware of the requirements of the data flows based on the needs of each category.
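One way to make these per-path requirements explicit is to attach a small requirements record to each path and check measured performance against it. The structure below is a sketch; the numeric targets are placeholders invented for illustration, and real values would come from the application's service-level agreement.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PathRequirements:
    max_delay_ms: float
    max_loss_ratio: float
    min_bandwidth_mbps: float
    encrypted: bool

# Illustrative values only; actual targets are SLA-driven.
PATH_REQUIREMENTS = {
    "client_to_app_server":      PathRequirements(100.0, 1e-3, 10.0, True),
    "app_server_to_app_server":  PathRequirements(10.0, 1e-4, 1000.0, True),
    "app_server_to_middleware":  PathRequirements(5.0, 1e-5, 1000.0, False),
    "middleware_to_file_server": PathRequirements(5.0, 1e-5, 10000.0, False),
}

def meets(path: str, measured_delay_ms: float, measured_loss: float) -> bool:
    """Check a path's measured delay and loss against its requirements."""
    req = PATH_REQUIREMENTS[path]
    return measured_delay_ms <= req.max_delay_ms and measured_loss <= req.max_loss_ratio
```

Encoding the requirements this way lets the network map each flow category to an appropriate class of service rather than treating all cloud traffic uniformly.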
Each communication path may exist within the cloud, or it may be a communication path from an external cloud client to a cloud server. A cloud client, in this context, is any entity (a server or a client device) that is deployed outside of the cloud but receives services from the cloud over a communication network.
Based on the diagram below, the communications RPs we identify are:
- Client to Server (Reference Point #1 and #6)
- Application Server to Application Server (Reference Point #2)
- Server to Middleware Server (Reference Point #3 and #7)
- File Server to File Server (Reference Point #4 and #8)
- File Server to Array Server (Reference Point #5)
Figure 1. Reference points in a cloud computing network.
Standards-based mechanisms exist that provide the QoS guarantees and performance monitoring for cloud computing applications. Applying these standards to the cloud computing model is relatively straightforward. Carrier Ethernet demarcation and aggregation devices are two examples of technologies that incorporate the high-bandwidth and service-control features necessary for wide-scale cloud deployments.
Some of the RPs defined in our model should have associated bandwidth profiles and QoS parameters. The IEEE, the ITU, and the MEF have addressed these issues for more than a decade in the context of Ethernet services. The protocols for Ethernet virtual circuits and service OAM are well developed, and they are precisely the tools cloud network providers and application developers need to create cloud applications and ensure their performance. In the next article in this series, we will examine which Carrier Ethernet mechanisms apply at each of our defined RPs.
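A MEF-style bandwidth profile is typically enforced with a two-rate token-bucket meter: traffic within the committed rate (CIR/CBS) is marked green, traffic within the excess rate (EIR/EBS) yellow, and the remainder red. The sketch below shows the idea in simplified form (in the spirit of MEF 10 and RFC 4115); it omits details such as coupling between the buckets and is not a conformant implementation.

```python
def two_rate_meter(packets, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
    """Color packets green/yellow/red with a simplified two-rate
    token-bucket bandwidth profile.

    `packets` is a list of (arrival_time_s, size_bytes) tuples in
    arrival order."""
    c_tokens, e_tokens = cbs_bytes, ebs_bytes   # buckets start full
    last_t, colors = 0.0, []
    for t, size in packets:
        dt = t - last_t
        last_t = t
        # Refill each bucket at its rate, capped at its burst size.
        c_tokens = min(cbs_bytes, c_tokens + cir_bps / 8 * dt)
        e_tokens = min(ebs_bytes, e_tokens + eir_bps / 8 * dt)
        if size <= c_tokens:
            c_tokens -= size
            colors.append("green")    # conforms to committed rate
        elif size <= e_tokens:
            e_tokens -= size
            colors.append("yellow")   # excess; deliverable but unassured
        else:
            colors.append("red")      # out of profile; typically dropped
    return colors
```

Green frames receive the assured QoS treatment, yellow frames are delivered only when capacity permits, and red frames are discarded at the ingress demarcation point.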
Mannix O’Connor is director of technical marketing at MRV Communications (www.mrv.com). He was chair of the Access Working Group for the MEF and founding secretary of the IEEE 802.17 Working Group. Mannix is a coauthor of the recent book Delivering Carrier Ethernet, published by McGraw-Hill.
Vladimir Bronstein is an independent consultant and has more than 20 years’ experience in telecom and data networking as a systems architect and director of software engineering. His experience encompasses broadband access, optical networking, and wireline and wireless networking as well as cable technologies. He has several patents pending for his networking innovations and has participated in industry standardization activities.