Accelerating SDN and NFV performance

Jan. 29, 2015
The benefits of analysis acceleration are well known. But should such appliances be virtualized?


By DAN JOE BARRY

As software-defined networks (SDNs) and network functions virtualization (NFV) gain wider acceptance and market share, the general sentiment is that this shift to a pure software model will bring flexibility and agility unknown in traditional networks. Now, network engineers face the challenge of managing this new configuration and ensuring high performance levels at speeds of 10, 40, or even 100 Gbps.

Creating a bridge between the networks of today and the software-based models of the future, virtualization-aware appliances use analysis acceleration to provide real time insight. That enables event-driven automation of policy decisions and real time reaction to those events, thereby allowing the full agility and flexibility of SDN and NFV to unfold.

Issues managing SDN, NFV

Managing SDN and NFV proves a challenge for most telecom carriers, given the considerable investment already made in operations support systems (OSS)/business support systems (BSS) and infrastructure. That management must now be adapted not only to SDN and NFV, but also to Ethernet and IP networks.

Most installed OSS/BSS systems have as their foundation the Fault, Configuration, Accounting, Performance, and Security (FCAPS) management model first introduced by the ITU-T in 1996. The Enhanced Telecom Operations Map (eTOM) simplified this concept to Fulfillment, Assurance, and Billing (FAB). Management systems tend to focus on one of these areas and often do so in relation to a specific part of the network or technology, such as optical access fault management.

The FCAPS and FAB models were founded on traditional voice-centric networks based on PDH and SDH. These were static, engineered, centrally planned and controlled networks whose protocols provided rich management information, making centralized management possible.

Still, there have been attempts to inject Ethernet and IP into these management concepts. For example, call detail records (CDRs) have been used for billing voice services, so the natural extension of this concept is to use IP detail records (IPDRs) for billing IP services. Such records (collectively, xDRs) are typically collected in 15-minute intervals, which is sufficient for billing, where real time processing usually isn't required. However, xDRs are also used by other management systems and programs as a source of information for decision-making.

The problem here is that since traditional telecom networks are centrally controlled and engineered, they don't change in a 15-minute interval. However, Ethernet and IP networks are completely different. Ethernet and IP are dynamic and bursty by nature. Because the network makes autonomous routing decisions, traffic patterns on a given connection can change from one IP packet or Ethernet frame to the next. Considering that Ethernet frames in a 100-Gbps network can be transmitted with as little as 6.7 nsec between each frame, we can begin to understand the significant distinction when working with a packet network.
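The 6.7-nsec figure quoted above can be checked with a quick back-of-the-envelope calculation: a minimum-size Ethernet frame (64 bytes) plus its preamble and interframe gap occupies 84 bytes, or 672 bits, on the wire.

```python
# Back-of-the-envelope check of the inter-frame timing quoted above.
LINK_RATE_BPS = 100e9        # 100-Gbps link
MIN_FRAME_BYTES = 64         # minimum Ethernet frame size
PREAMBLE_BYTES = 8           # preamble + start-of-frame delimiter
INTERFRAME_GAP_BYTES = 12    # standard interframe gap

wire_bytes = MIN_FRAME_BYTES + PREAMBLE_BYTES + INTERFRAME_GAP_BYTES
frame_time_ns = wire_bytes * 8 / LINK_RATE_BPS * 1e9

print(f"{frame_time_ns:.2f} ns per minimum-size frame")  # → 6.72 ns
```

At that rate an appliance has well under 7 nsec to account for each worst-case frame, which is what makes continuous monitoring at 100 Gbps so demanding.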

Not a lot of management information is provided by Ethernet and IP, either. If a carrier wants to manage a service provided over Ethernet and IP, it needs to collect all the Ethernet frames and IP packets related to that service and reassemble the information to get the full picture. While switches and routers could be used to provide this kind of information, it became obvious that continuous monitoring of traffic in this fashion would affect switching and routing performance. Hence, the introduction of dedicated network appliances that could continuously monitor, collect, and analyze network traffic for management and security purposes.

Network appliances as management tools

Network appliances have become essential for Ethernet and IP, continuously monitoring the network, even at speeds of 100 Gbps, without losing any information. And they provide this capability in real time.

Network appliances must capture and collect all network information for the analysis to be reliable. Network appliances receive data either from a Switched Port Analyzer (SPAN) port on a switch or router that replicates all traffic or from passive taps that provide a copy of network traffic. They then need to precisely timestamp each Ethernet frame to enable accurate determination of events and latency measurements for quality of experience assurance. Network appliances also recognize the encapsulated protocols as well as determine flows of traffic that are associated with the same senders and receivers.
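The flow-determination step described above typically amounts to grouping packets by their 5-tuple (addresses, ports, protocol) so that both directions of a conversation land in the same flow record. A minimal sketch, with illustrative field names rather than any vendor's API:

```python
import time
from collections import defaultdict

def flow_key(src_ip, dst_ip, proto, src_port, dst_port):
    # Sort the endpoints so both directions of a conversation
    # map to the same (bidirectional) flow record.
    a = (src_ip, src_port)
    b = (dst_ip, dst_port)
    return (proto,) + (a + b if a <= b else b + a)

flows = defaultdict(lambda: {"packets": 0, "bytes": 0,
                             "first_ts": None, "last_ts": None})

def record_packet(src_ip, dst_ip, proto, src_port, dst_port, length, ts=None):
    # Real accelerators timestamp in hardware; time.time() stands in here.
    ts = time.time() if ts is None else ts
    f = flows[flow_key(src_ip, dst_ip, proto, src_port, dst_port)]
    if f["first_ts"] is None:
        f["first_ts"] = ts
    f["last_ts"] = ts
    f["packets"] += 1
    f["bytes"] += length

# Two packets, one in each direction, accumulate into a single flow.
record_packet("10.0.0.1", "10.0.0.2", "tcp", 1234, 80, 1500, ts=1.0)
record_packet("10.0.0.2", "10.0.0.1", "tcp", 80, 1234, 60, ts=2.0)
print(len(flows))  # → 1
```

The first/last timestamps per flow are what make latency and quality-of-experience measurements possible downstream.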

Appliances are broadly used for effective high performance management and security of Ethernet and IP networks. However, the taxonomy of network appliances has grown outside of the FCAPS and FAB nomenclature. The first appliances were used for troubleshooting performance and security issues, but appliances have gradually become more proactive, predictive, and preventive in their functionality. The real time capabilities these appliances provide make them essential for effective management of Ethernet and IP networks, so they need to be included in any framework for managing and securing SDN and NFV.

Benefits of analysis acceleration

Commercial off-the-shelf servers with standard network interface cards (NICs) can form the basis for appliances. But they are not designed for continuous capture of large amounts of data and tend to lose packets. For guaranteed data capture and delivery for analysis, hardware acceleration platforms are used, such as analysis accelerators, which are intelligent adapters designed for analysis applications.

Analysis accelerators are designed specifically for analysis and meet the nanosecond-precision requirements for real time monitoring. They're similar to NICs but differ in that they're built for continuous monitoring and analysis of high speed traffic at maximum capacity. Monitoring a 10-Gbps bidirectional connection can mean processing 30 million packets per second, whereas a typical NIC is designed to process about 5 million packets per second; it's very rare that a communication session between two parties would require more than that.
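The 30-million-packets-per-second figure follows from line rate and minimum frame size, assuming the worst case of back-to-back 64-byte frames in both directions:

```python
# Worst-case packet rate on a 10-Gbps bidirectional link.
LINK_RATE_BPS = 10e9
WIRE_BYTES_PER_FRAME = 84   # 64-byte frame + preamble + interframe gap

pps_one_direction = LINK_RATE_BPS / (WIRE_BYTES_PER_FRAME * 8)
pps_bidirectional = 2 * pps_one_direction

print(f"{pps_bidirectional / 1e6:.1f} Mpps")  # → 29.8 Mpps, roughly 30 million
```

Real traffic mixes larger frames, so sustained rates are lower, but an appliance must be sized for this worst case to guarantee zero loss.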

Furthermore, analysis accelerators provide extensive functionality for offloading of data pre-processing tasks from the analysis application. This feature ensures that as few server CPU cycles as possible are used on data pre-processing and enables more analysis processing to be performed.

Carriers can assess the performance of the network in real time and gain an overview of application and network use by continuously monitoring the network. The information can also be stored directly to disk, again in real time, as it's being analyzed. This approach is typically used in troubleshooting to determine what might have caused a performance issue in the network. It's also used by security systems to detect any previous abnormal behavior.

It's possible to detect performance degradations and security breaches in real time if these concepts are taken a stage further. The network data that's captured to disk can be used to build a profile of normal network behavior. By comparing this profile to real time captured information, it's possible to detect anomalies and raise a flag.
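One simple way to realize the baseline-versus-live comparison described above is a statistical profile: compute the mean and standard deviation of a metric (say, packets per second) from the captured history, then flag live samples that deviate by more than a few standard deviations. A minimal sketch, with illustrative thresholds and sample values:

```python
import statistics

def build_profile(samples):
    # Baseline built from capture-to-disk history,
    # e.g. packets/sec per measurement interval.
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomaly(value, profile, n_sigma=3.0):
    # Flag live samples more than n_sigma standard deviations from the mean.
    mean, stdev = profile
    return abs(value - mean) > n_sigma * stdev

baseline = [980, 1010, 995, 1005, 990, 1000, 1015, 985]  # historical pps samples
profile = build_profile(baseline)

print(is_anomaly(1002, profile))  # normal traffic → False
print(is_anomaly(5000, profile))  # traffic spike  → True
```

Production systems use richer profiles (per-flow, per-service, time-of-day), but the principle of comparing live captures against a stored baseline is the same.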

In a policy-driven SDN and NFV network, this kind of capability can be very useful. If performance degradation is flagged, then a policy can automatically take steps to address the issue. If a security breach is detected, then a policy can initiate more security measurements and correlation of data with other security systems. It can also go so far as to use SDN and NFV to reroute traffic around the affected area and potentially block traffic from the sender in question.

Using real time capture, capture-to-disk, and anomaly detection of network appliances with hardware acceleration, SDN and NFV performance can be maximized through a policy-driven framework.

Requirements, constraints

Network appliances can be used to provide real time insight for management and security in SDN and NFV environments. But a key question remains: Can network appliances be fully virtualized and provide high performance at speeds of 10, 40, or even 100 Gbps?

Because network appliances are already based on standard server hardware with applications designed to run on x86 CPU architectures, they lend themselves very well to virtualization. The issue is performance. Virtual appliances are sufficient for low speed rates and small data volumes but not for high speeds and large data volumes.

Performance at high speed is an issue even for physical-network appliances. That's why most high performance appliances use analysis acceleration hardware. While analysis acceleration hardware frees CPU cycles for more analysis processing, most network appliances still use all the CPU processing power available to perform their tasks. That means virtualization of appliances can only be performed to a certain extent. If the data rate and amount of data to be processed are low, then a virtual appliance can be used, even on the same server as the clients being monitored.

It must be noted, though, that the CPU processing requirements for the virtual appliance increase as the data rate and volume of data grow. At first, that will mean the virtual appliance needs exclusive access to all the available CPU resources. But even then, it will run into some of the same performance issues as physical-network appliances using standard NIC interfaces with regard to packet loss, precise timestamping capabilities, and efficient load balancing across the multiple CPU cores available.

Virtual appliances can't escape the constraints network appliances face in the physical world; those constraints must be confronted. One way of addressing the issue is to use physical appliances to monitor and secure virtual networks. Virtualization-aware network appliances can be "service-chained" with virtual clients as part of the service definition. This requires that the appliance identify virtual networks, typically done today using VLAN encapsulation, which is already broadly supported by high performance appliances and analysis acceleration hardware. The appliance can then provide its analysis functionality in relation to the specific VLAN and virtual network.
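Identifying the virtual network via VLAN encapsulation comes down to parsing the IEEE 802.1Q tag in the Ethernet header: a TPID of 0x8100 after the source MAC, followed by a 12-bit VLAN ID. A minimal sketch:

```python
import struct

TPID_8021Q = 0x8100  # EtherType value marking an 802.1Q-tagged frame

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    if len(frame) < 18:
        return None
    # EtherType/TPID field sits after the 6-byte dst and 6-byte src MACs.
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != TPID_8021Q:
        return None
    (tci,) = struct.unpack_from("!H", frame, 14)  # tag control information
    return tci & 0x0FFF                           # low 12 bits = VLAN ID

# Tagged frame: dst MAC, src MAC, 802.1Q tag (VLAN 42), inner EtherType, payload.
frame = bytes(6) + bytes(6) + struct.pack("!HH", TPID_8021Q, 42) \
        + b"\x08\x00" + bytes(46)
print(vlan_id(frame))  # → 42
```

With the VLAN ID extracted per frame, the appliance can keep statistics, baselines, and policies per virtual network rather than per physical port.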

Such an approach can be used to phase in SDN and NFV migration. It's broadly accepted that there are certain high performance functions in the network that will be difficult to virtualize at this time without performance degradation. A pragmatic solution is an SDN and NFV management and orchestration approach that takes account of physical- and virtual-network elements. That means policy and configuration doesn't have to concern itself with whether the resource is virtualized or not but can use the same mechanisms to "service-chain" the elements as required.

A mixture of existing and new approaches for management and security will be required due to the introduction of SDN and NFV. They should be deployed under a common framework with common interfaces and topology mechanisms. With this commonality in place, functions can be virtualized when and where it makes sense without affecting the overall framework or processes.

Bridging the gap

SDN and NFV promise network agility and flexibility, but they also bring numerous performance challenges due to the high speeds networks are beginning to require. Reliable real time data for management and analytics is crucial, and that's what network appliances provide. These appliances can be virtualized, but the performance constraints of physical appliances still apply to their virtual counterparts. Physical and virtual elements must be considered together when managing and orchestrating SDN to ensure that virtualization-aware appliances bridge the gap between current network functions and the emerging software-based model.

DAN JOE BARRY is vice president of marketing at Napatech and has more than 20 years' experience in the IT and telecom industry.
