Cloud Control Plane Simplifies Network Slicing for Edge Transport

July 29, 2019
A mix of SDN, NFV, and end-to-end network slicing across the RAN, edge transport, and core networks is a much more flexible and cost-efficient way to address diverse 5G requirements than building overlay networks.

Next-generation 5G technology opens up many applications previously unavailable over wireless, thanks to network performance enhancements such as capacity increases of up to 100X and ultra-low-latency operation. This advancement will not only enhance the entertainment industry through high-quality streaming video, but will also extend to medical innovations such as remote surgery, manufacturing productivity and quality gains through robotics, and more energy-efficient transportation through autonomous vehicle operation.

Each of these use cases poses different challenges for the communications network, because capacity and latency targets demand equipment placement and scaling strategies that conflict with one another. To maximize performance for enhanced mobile broadband, a centralized radio access network (C-RAN) architecture is used to provide cell-site aggregation. This approach improves performance and resource pooling, enabling efficient asset deployment while simplifying RAN engineering. Alternatively, use cases that require ultra-low latency, such as remote surgery and autonomous vehicle operation, are better served by a distributed RAN (D-RAN) architecture in which all the RAN components are deployed at the cell site.

Traditionally, such diverse applications would be addressed by building edge transport overlay networks. While these overlay networks guarantee performance and keep one use case from affecting another, they are cost-prohibitive from both a capital and an operational perspective, with two networks to operate, maintain, and scale. Put another way, there are potentially two sets of RAN and transport equipment, support resources, skills training, and spares inventory to deploy, maintain, and manage.

Virtual networks via network slicing

There is a much more flexible and cost-efficient way to address 5G requirements: a mix of software-defined networking (SDN), network functions virtualization (NFV), and end-to-end network slicing across the RAN, edge transport, and core networks. This RAN virtualization lays the groundwork for a single physical network infrastructure supporting multiple virtual network configurations, each representing a network slice, hence the term "network slicing." Each network slice is a complete virtual network within the shared infrastructure.
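To make the idea concrete, here is a minimal Python sketch of how two such slices might be described as data, one tuned for enhanced mobile broadband and one for ultra-low latency. The class, field names, and target figures are illustrative assumptions, not any particular vendor's model.

    # Illustrative only: hypothetical slice descriptors, not a vendor API.
    from dataclasses import dataclass

    @dataclass
    class NetworkSlice:
        name: str              # slice identifier
        ran_split: str         # "C-RAN" (centralized) or "D-RAN" (distributed)
        max_latency_ms: float  # end-to-end latency target
        capacity_gbps: float   # aggregate capacity target
        isolated: bool         # whether traffic must be isolated from other slices

    # Two slices sharing one physical infrastructure
    embb = NetworkSlice("eMBB-video", ran_split="C-RAN",
                        max_latency_ms=20.0, capacity_gbps=100.0, isolated=True)
    urllc = NetworkSlice("URLLC-autonomous", ran_split="D-RAN",
                         max_latency_ms=1.0, capacity_gbps=10.0, isolated=True)

    for s in (embb, urllc):
        print(f"{s.name}: {s.ran_split}, <= {s.max_latency_ms} ms, {s.capacity_gbps} Gbps")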

To dynamically accommodate the diverse topology configurations of the RAN components – radio unit (RU), distributed unit (DU), and central unit (CU) – the DUs and CUs are virtualized onto x86 blades so that these functions can be instantiated on demand throughout the network. Installing x86 blades at strategic points within the network enables the network operator to turn up DU instances and CU user plane (UP) terminations wherever they are needed.
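As a rough illustration of that on-demand placement decision, the following Python sketch chooses between hosting a virtualized DU at an aggregation hub (C-RAN) or keeping it at the cell site (D-RAN) based on a slice's latency budget. The site names, latency budgets, and per-kilometer delay figure are assumptions for the sketch only.

    # Illustrative placement logic for virtualized DU functions; site names,
    # fields, and latency budgets are assumptions for this sketch.
    from dataclasses import dataclass, field

    @dataclass
    class Site:
        name: str
        distance_km: float              # fiber distance from the cell site
        vdus: list = field(default_factory=list)

    def place_vdu(slice_name: str, cell_site: Site, hub_site: Site,
                  fronthaul_budget_ms: float) -> Site:
        """Instantiate a vDU at the hub (C-RAN) when the latency budget allows,
        otherwise keep it at the cell site (D-RAN)."""
        # Rough one-way propagation delay: ~5 microseconds per km of fiber.
        hub_delay_ms = hub_site.distance_km * 0.005
        target = hub_site if hub_delay_ms < fronthaul_budget_ms else cell_site
        target.vdus.append(slice_name)
        return target

    cell = Site("cell-site-17", distance_km=0.0)
    hub = Site("aggregation-hub-3", distance_km=15.0)
    print(place_vdu("URLLC-autonomous", cell, hub, fronthaul_budget_ms=0.05).name)
    print(place_vdu("eMBB-video", cell, hub, fronthaul_budget_ms=0.2).name)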

In the same way, the edge transport network establishes a common infrastructure using programmable, disaggregated network elements. Edge transport routers are deployed from the DU, where the network slice begins, to the core, offering dynamic multipoint connectivity. To help maintain predictable low-latency operation, MPLS segment routing (MPLS-SR) serves as the common infrastructure technology that facilitates network slicing.
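One simplified way to picture how MPLS-SR supports slicing is as a per-slice segment list that steers each slice's traffic over an appropriate path from the DU-facing ingress router toward the core. The segment IDs and paths in this Python sketch are invented for illustration.

    # Illustrative only: per-slice segment lists for MPLS-SR steering.
    # Node segment IDs (SIDs) and paths are invented for the sketch.
    segment_lists = {
        # eMBB can take the high-capacity path through the aggregation hub.
        "eMBB-video":       [16001, 16005, 16010],   # DU -> hub -> core
        # URLLC is pinned to the shortest, lowest-latency path.
        "URLLC-autonomous": [16001, 16010],          # DU -> core, bypassing the hub
    }

    def label_stack(slice_name: str) -> list:
        """Return the MPLS label stack to impose at the DU-facing ingress router."""
        return list(segment_lists[slice_name])

    print(label_stack("URLLC-autonomous"))   # [16001, 16010]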

Traditional router architectures are vertically integrated, self-contained network elements. They consist of a chassis with line cards deployed in predefined slots, along with switch fabric and control cards in other slots. Connectivity between line cards and switch cards is provided by electrical backplane traces, commonly referred to as serializer/deserializer (SERDES) lanes. The number of traces between slots and the speed at which they are clocked determine the maximum inter-slot communication capacity (a back-of-the-envelope calculation follows the list below). This architecture requires the alignment of three hardware components: the line cards, the switch fabric cards, and the backplane. Service providers are challenged in three areas when specifying a router platform for their 5G network:

  1. Determining the right capacity and performance for the site demands
  2. Minimizing the physical and environmental allocations
  3. Scaling platform capacity and performance for the long term
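The inter-slot capacity constraint mentioned above can be made concrete with a simple Python calculation; the lane count and per-lane rate are assumptions chosen only to illustrate the trade-off, not figures from any specific platform.

    # Back-of-the-envelope inter-slot capacity; lane counts and rates are
    # assumptions chosen only to illustrate the constraint.
    serdes_lanes_per_slot = 32   # backplane traces between a line-card slot
    lane_rate_gbps = 25          # and the switch fabric, clocked at 25 Gbps each

    slot_capacity_gbps = serdes_lanes_per_slot * lane_rate_gbps
    print(f"Max inter-slot capacity: {slot_capacity_gbps} Gbps")   # 800 Gbps

    # Pushing the slot beyond 800 Gbps requires faster lanes, more traces,
    # or both, which means new fabric cards and possibly a new backplane.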

Control in the cloud

Router vendors typically offer a mixture of low-, medium-, and high-capacity units. Sizing an integrated router is a risk, as the control plane, backplane speed, and chassis capacity limit the performance and scaling of the user plane blades. Under-allocating router performance risks lost opportunity, whereas over-allocating it results in capex inefficiency.

During initial installation, only 20% to 30% of the router capacity is utilized, but the chassis footprint, power, and thermal reserve all have to be fully allocated, resulting in cost inefficiencies. Any time slot capacity is increased, all three hardware elements must move in lockstep.

Because service providers loathe the idea of forklift replacement of the chassis and backplane, vendors try to future-proof their node designs to support capacity expansion, including cooling, power, and backplane traces. However, because any chassis design uses the most cost-effective, commercially available technology of its time, there are limits to how far vendors can future-proof the network element. Once those limits are reached, further capacity enhancements require replacement with a newer chassis.

To resolve the limitations of the traditional router, consider a next-generation router that employs a programmable, disaggregated control and user plane architecture. The control plane is completely independent of the user plane, and in advanced models it is hosted and executed in the cloud. Incorporating cloud-native technology and routing protocol isolation into the disaggregated router via a cloud control plane model results in a single 1RU blade element capable of dynamically producing hundreds of router instances for RAN services and customer isolation. The virtual routing segments, quality of service (QoS), and resiliency requirements are provisioned in the cloud using automation for the virtualized service.
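The following Python sketch suggests, in very simplified form, how a cloud-hosted control plane might spin up per-slice virtual router instances on a shared 1RU user-plane blade. The class names, fields, and provisioning call are hypothetical, not a description of any vendor's actual API.

    # Hypothetical sketch of a cloud control plane creating per-slice virtual
    # router instances on a shared user-plane blade; names are assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class VirtualRouter:
        slice_name: str
        qos_class: str       # e.g., "low-latency" or "best-effort"
        vrf_id: int

    @dataclass
    class UserPlaneBlade:
        hostname: str
        routers: list = field(default_factory=list)

    class CloudControlPlane:
        """Runs in the cloud; only the computed state is pushed to the blade."""
        def __init__(self):
            self._next_vrf = 100

        def provision(self, blade: UserPlaneBlade, slice_name: str, qos_class: str):
            vr = VirtualRouter(slice_name, qos_class, self._next_vrf)
            self._next_vrf += 1
            blade.routers.append(vr)   # "push" the computed instance down
            return vr

    cp = CloudControlPlane()
    blade = UserPlaneBlade("edge-blade-1")
    cp.provision(blade, "URLLC-autonomous", "low-latency")
    cp.provision(blade, "eMBB-video", "best-effort")
    print([r.slice_name for r in blade.routers])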

Once the cloud control plane calculates the mapping for each service, the control information is pushed down to the router user plane infrastructure. If a physical site suffers a catastrophic failure, its virtual routing profile can be moved in the cloud control plane to another physical site, simplifying resiliency operations. Applying this architecture to the router optimizes physical and environmental cost-efficiencies, simplifies network engineering, reduces infrastructure capacity risks, and offers superior performance scaling.
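A similarly simplified sketch of the resiliency mechanism: on a site failure, the cloud control plane re-homes that site's virtual routing profiles to a surviving site in a single operation. Site and profile names are invented for illustration.

    # Illustrative failover: the cloud control plane re-homes a failed site's
    # virtual routing profiles to a surviving site; names are invented.
    routing_profiles = {
        "edge-site-A": ["vrf-URLLC-autonomous", "vrf-eMBB-video"],
        "edge-site-B": ["vrf-enterprise-vpn"],
    }

    def fail_over(failed_site: str, target_site: str) -> None:
        """Move every virtual routing profile off the failed site in one step."""
        routing_profiles[target_site].extend(routing_profiles.pop(failed_site, []))

    fail_over("edge-site-A", "edge-site-B")
    print(routing_profiles["edge-site-B"])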

Scalable slices

The network orchestrator coordinates this ecosystem across the core, edge transport, and RAN elements. As network slices are established via DU asset allocations and multiple CU-UP terminations, the transport network establishes router instances to support these individual services and customers, providing the transport QoS guarantees.
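An end-to-end slice establishment might then look roughly like the following Python sketch, in which a hypothetical orchestrator function allocates RAN assets and requests a matching transport router instance in one coordinated step; all function names and parameters are assumptions for illustration.

    # Hypothetical end-to-end flow: the orchestrator allocates DU/CU-UP assets
    # and asks the transport control plane for a matching router instance.
    def allocate_ran(slice_name: str, ran_split: str) -> dict:
        return {"slice": slice_name,
                "du_site": "hub-3" if ran_split == "C-RAN" else "cell-17",
                "cu_up": f"cu-up-{slice_name}"}

    def provision_transport(slice_name: str, qos_class: str) -> dict:
        return {"slice": slice_name,
                "router_instance": f"vr-{slice_name}",
                "qos": qos_class}

    def establish_slice(slice_name: str, ran_split: str, qos_class: str) -> dict:
        ran = allocate_ran(slice_name, ran_split)
        transport = provision_transport(slice_name, qos_class)
        return {"ran": ran, "transport": transport}   # one coordinated record per slice

    print(establish_slice("URLLC-autonomous", "D-RAN", "low-latency"))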

A traditional router architecture with an integrated control and user plane is initially cost-inefficient, carries the risk of over- or under-provisioning based on chassis size, and has limited scaling headroom over the long term. The disaggregated cloud control and user plane router approach, by contrast, establishes a single transport infrastructure with the ability to dynamically virtualize multiple networks cost-effectively and in a highly scalable fashion. As today's networks continue to evolve, this dynamic flexibility will be key to allowing tomorrow's architecture to meet diverse needs for capacity, latency, and performance, fulfilling the 5G promise.

Joe Mocerino is global solution architect at Fujitsu Network Communications.

About the Author

Joe Mocerino | Global Solution Architect, Fujitsu Network Communications

Joe Mocerino is a principal solutions architect at Fujitsu. Joe oversees solutions strategy and technical marketing for the Fujitsu 1FINITY, Smart xHaul, and Smart Optics portfolios. He has written numerous whitepapers and spoken at telco and MSO forums, currently focusing on mobile xHaul optimization and service delivery. Joe has a thirty-year track record in product line management, marketing, business development, sales, engineering, and manufacturing. His technology expertise includes packet optical networking, CPRI/eCPRI optical fronthaul, and network slicing. Joe holds a Bachelor of Science degree in Electrical Engineering from Fairleigh Dickinson University in Teaneck, NJ.
