Solid Reasons for Virtual Functions

Aug. 19, 2015

By Pete Koat, Incognito Software Systems

As the industry focuses on improving service offerings, service providers have been looking for technological solutions to address the twin problems of shrinking ARPU and rising OPEX and CAPEX demands.

The industry - through organizations like the Broadband Forum, CableLabs and the SCTE - has responded with a number of solutions to automate tasks, improve visibility of network demands and issues, and increase capacity. Still, margins have continued to shrink as CAPEX and OPEX have been growing at a significantly faster rate than revenue.

Figure 1: Shrinking margins: for many providers, CAPEX and OPEX are rising faster than their incoming revenue.

One weapon that service providers have in their arsenal is network functions virtualization (NFV). This approach advocates replacing hardware network elements with software running on commercial off-the-shelf (COTS) servers. The advantages:

  • Servers are competitively priced and readily available; they can be optimally located, rapidly deployed, and easily upgraded, facilitating rapid scaling as demands change.
  • The functionality can be deployed wherever it is most cost-effective; in some cases, this could also be distributed.
  • Standardization reduces yearly CAPEX and OPEX budgets.
  • Consolidation reduces power consumption within the plant.
  • Consolidation in provider datacenters derives economies of scale.

Five major forces are driving the NFV movement: Computation, Physics, Philosophy, Geography, and Politics.

Computation: The line between communications and computation has blurred over the years; this can best be seen with the quad-core 2.8 GHz mobile phone and sophisticated smart TVs with built-in digital decoders, DLNA clients, etc. Computing power is ubiquitous and has now hit a price point that makes it advantageous to migrate device-specific functionality to a virtual equivalent that can be located anywhere.

Physics: When smartphones first launched, they had dedicated application-specific integrated circuit cores to perform video decoding. As CPU power and efficiency increased, that multimedia functionality moved into software. Virtualization continues the same migration, moving tasks from dedicated hardware into software.

Philosophy: Continuing with the above example, a discrete circuit is more energy-efficient, but it is difficult to augment later and cannot scale dynamically with load. Virtualization provides a flexible framework with programmable, configurable capabilities.

Geography: Location flexibility is one of the major areas where an operator can save on both CAPEX and OPEX. The factors to consider when deciding where a virtualized network function should run include cooling and energy, management, maintenance, regulatory requirements, security, economies of scale, and even real estate costs. Notably, a distributed NFV approach is often preferable to full consolidation, as concentrating everything in a single datacenter may not be the right answer.

Politics: Traditionally, the approach was to differentiate among routing, administration and forwarding. This resulted in the creation of a data plane, control plane and management plane, where the management plane requires human interaction and therefore risks becoming a single, centralized point of failure. This approach is also siloed and slow. What software-defined networking (SDN) really does is erase the difference between the control and management planes.

It’s no secret that NFV complements SDN. Where NFV advocates virtualizing network functions, SDN advocates replacing distributed network protocols with centralized software applications that can configure every element in the network. The advantages of SDN are similar: agility, cost savings and simplification. One of the major wins for both SDN and NFV is the velocity at which new services can now be created in software and deployed.
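To make the centralized-application idea concrete, here is a minimal sketch in Python of one program holding the desired network state and fanning it out to every element it manages. The element names and the push_config() helper are illustrative stand-ins invented for this example, not any real controller's API:

```python
# One centralized application holds the desired network state and pushes it
# to every element, instead of each box computing its own state via a
# distributed protocol. Element names and push_config() are hypothetical.
DESIRED_VLANS = {
    "access-sw-01": [10, 20],
    "access-sw-02": [10, 30],
}

def push_config(element, vlans):
    # A real controller would speak OpenFlow, NETCONF, or a REST API here;
    # this stub just shows the centralized fan-out pattern.
    print(f"configuring {element}: vlans={vlans}")

for element, vlans in DESIRED_VLANS.items():
    push_config(element, vlans)
```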

There are drawbacks to virtualization. A purpose-built hardware solution offers cost savings for mass-produced products, tighter miniaturization and packaging, higher processing rates, and lower energy consumption. For these reasons, core network elements will likely continue to reside in dedicated hardware.

However, we have already seen virtualization of the termination side with the virtual CCAP (vCCAP); the corollary on the residential side is the virtual gateway (vCPE), which covers home router and set-top box functions. In the same fashion as the vCCAP, vCPE virtualizes the functions of the gateway. These functions could include DHCP, firewall, port forwarding, DLNA, parental controls, or some future function that could be added retroactively to the service offering.
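As a rough illustration of what virtualizing gateway functions can look like, the sketch below models features such as the firewall as composable software handlers that a hosted vCPE instance chains together. The VirtualGateway class, the PacketContext record, and the handler convention are assumptions made up for this example, not any vendor's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class PacketContext:
    # Hypothetical record carried through the chain of gateway functions.
    src_ip: str
    dst_port: int
    allowed: bool = True
    notes: list = field(default_factory=list)

# Each virtualized gateway function (firewall, parental controls, ...) is
# just a callable that inspects and annotates the context -- no firmware.
VnfHandler = Callable[[PacketContext], PacketContext]

class VirtualGateway:
    """Chains virtualized CPE functions in registration order."""

    def __init__(self) -> None:
        self._chain: Dict[str, VnfHandler] = {}

    def register(self, name: str, handler: VnfHandler) -> None:
        # Functions can be added at runtime -- the "retroactive" service
        # additions described above.
        self._chain[name] = handler

    def process(self, ctx: PacketContext) -> PacketContext:
        for name, handler in self._chain.items():
            ctx = handler(ctx)
            if not ctx.allowed:
                ctx.notes.append(f"dropped by {name}")
                break
        return ctx

def basic_firewall(ctx: PacketContext) -> PacketContext:
    if ctx.dst_port == 23:  # toy rule: block inbound telnet
        ctx.allowed = False
    return ctx

gateway = VirtualGateway()
gateway.register("firewall", basic_firewall)
print(gateway.process(PacketContext(src_ip="203.0.113.7", dst_port=23)))
```

Adding parental controls or port forwarding to this kind of platform means registering one more handler on the server side, rather than shipping new firmware to every deployed box.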

But what next?

Here’s the question that the industry is grappling with: Should service providers virtualize the entire network and gateway, or is it better to adopt a hybrid approach? Additionally, how long should you wait to transition to a virtual approach?

History suggests that each extreme of the spectrum offers specific advantages. Typically, the ideal placement is somewhere between the two, where you can choose the attributes best suited to the problem you are trying to solve.

Over the last decade, we have seen successive advancements in the types of gateways being deployed, from the days of simple Ethernet Access Devices (EAD) to complex gateways that support media streaming, parental controls, time blocking, multiple WiFi radios, advanced diagnostics, and even embedded MTA capabilities. In this timespan, we have seen essentially both ends of the spectrum surface. While rigid single-purpose CPE units can be more efficient than their virtual counterparts, they lack the flexibility to let operators shorten time-to-market for new services and scale operations to accommodate newer, better customer service models.

The philosophy of the NFV movement is to move as much of the software stack as possible off the end-of-line device and into a virtualized environment, where new functional capabilities can be provisioned and added quickly without firmware updates.

A more traditional approach to the virtualized gateway would adopt virtualization only for new or undefined functionality. The gateway would still be a complex embedded device with all the modern-day capabilities, but with arbitrary functionality that can be dynamically loaded from a remote source. The challenge of continuing on that path is that you are left with a costly, non-standard platform with divergent offerings and software capabilities. The optimal approach is to isolate and decouple as much software from hardware as possible, while still retaining today’s advanced capabilities.
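To illustrate what loading arbitrary functionality without a firmware update can look like, here is a minimal Python sketch. The apply() entry point, the settings dictionary, and the parental-controls feature are hypothetical conventions assumed for the example; in a real deployment the feature source would be delivered from the operator's back office rather than a temporary file:

```python
import importlib.util
import pathlib
import tempfile

# New functionality arrives as plain source. It is written to a temp file
# here only to keep the example self-contained.
FEATURE_SRC = '''
def apply(settings):
    # Hypothetical parental-controls feature: add a blocked-hours window.
    settings["blocked_hours"] = ("21:00", "07:00")
    return settings
'''

def load_and_apply(feature_path, settings):
    # Load the module from its source file and activate it immediately --
    # no firmware image is rebuilt or reflashed.
    spec = importlib.util.spec_from_file_location("feature", feature_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module.apply(settings)

with tempfile.TemporaryDirectory() as tmp:
    path = pathlib.Path(tmp) / "parental_controls.py"
    path.write_text(FEATURE_SRC)
    print(load_and_apply(path, {"wifi_ssid": "home"}))
```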

Figure 2: An optimal path requires no new gateway to use virtualization, though it does require a server and BSS stack to host the virtualized cable modem capabilities, which is an additional cost. That said, a homogenized view of a diverse gateway population yields long-term OPEX savings.

There are many benefits to this strategy:

  • It's vendor-agnostic with a common functional software stack.
  • Standardization reduces yearly CAPEX and OPEX budgets.
  • The operator has control over creating and deploying new software capabilities to any/all gateways.
  • It provides a flexible platform with programmable, configurable capabilities, without having to wait for a firmware update.
  • It allows virtualized functionality to be deployed wherever it is most cost-effective, deriving economies of scale.
  • Consolidation reduces power consumption in the outside plant.
  • It reduces truck rolls and customer complaints by removing the need to replace legacy devices with ones capable of supporting current offerings.

When these strategic goals are adopted, the picture of an ultimate gateway emerges: an EAD with advanced hardware capabilities - including multiple wireless radios, remote diagnostics, and other differentiating capabilities - with the gateway's software stack virtualized, from DHCP to time blocking and CPE usage statistics. Best of all, this approach to vCPE works with existing deployed gateways, virtually eliminating the barrier to adopting vCPE today.

If you have legacy devices that cannot load new functionality on their own, the choice is clear. With a comprehensive virtualization solution, you can essentially hit the virtual turbo button on your legacy devices - maximizing your sunk CAPEX and adding advanced functionality to end-of-life (EOL), unsupported legacy devices without having to modify the firmware. Doesn’t that sound nice?

Pete Koat is the chief technology officer of Incognito Software Systems. Reach him at [email protected].
