Mobile data traffic continues to grow exponentially, driving communication service providers (CSPs) to add capacity to keep pace with demand. Whereas early LTE deployments concentrated on providing coverage, the focus is now on adding capacity through:
- more spectrum, including low, mid, and high bands
- cell densification by adding more cells or more sectors
- spectral efficiency improvements with more antennas and better inter-cell coordination.
At the same time, CSPs are preparing for 5G and cloud RAN (C-RAN) architectures that will enable services with very different characteristics in terms of capacity, latency, synchronization, reliability, and connectivity requirements. Their focus is on three main use cases: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (uRLLC). This last use case will redefine the network edge by introducing multi-access edge computing (MEC). MEC hubs will place processing closer to users to reduce latency and improve performance by optimizing capacity and workload placement.
Mobile networks evolve to C-RAN
CSPs are increasingly implementing centralized and C-RAN architectures to accommodate uRLLC and MEC. Rethink Research estimates that centralized and virtualized cell sites will be deployed at a 23% compound annual growth rate (CAGR) between 2017 and 2025, surpassing new deployments of conventional cells in 2022. A sharp acceleration is expected in the number of edge cloud sites from 2020 onwards when operators begin to deploy RAN network functions virtualization (NFV).
Early deployments in centralized RANs will provide a solid foundation and stepping stone to move towards C-RAN. In many cases, the central offices or hub sites housing the central baseband processing can serve the MEC functions needed for 5G networks. As shown in Figure 1, the coexistence of 4G and 5G means that a mix of RAN architectures will need to be served by converged mobile transport for cost-effective delivery of all services. This includes being able to support existing fronthaul protocols, such as CPRI and OBSAI, as well as new ones defined for 5G such as eCPRI and Next-Generation Fronthaul Interface (NGFI).
Moving to C-RAN presents clear benefits. Operators achieve savings in capex and opex stemming from cell site simplification and overall system performance improvements. Specifically, the separation of the radios from the baseband units (BBUs) enables less equipment at cell sites, leading to space, power, and operational savings while also providing performance efficiencies. By supporting locally centralized baseband clusters, mobile network operators (MNOs) can enable improved coordination among the radios in the pool. This enhances the performance of advanced features, such as inter-cell carrier aggregation (CA) or coordinated multi-point (CoMP), that rely on strict coordination among cells. In addition, the centralization of the baseband enables MNOs to take advantage of pooling gains, because each radio need not be configured for peak capacity. To reap the benefits of centralized RAN, operators need to deploy high-capacity and low-latency fronthaul networks (for example, fiber or WDM-based systems).
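The pooling gain mentioned above can be illustrated with a toy simulation (all numbers below are illustrative assumptions, not operator data): because cells rarely peak at the same moment, a shared baseband pool can be dimensioned well below the sum of per-cell peaks.

```python
# Toy sketch of baseband pooling gain, under assumed traffic behavior:
# each cell's instantaneous load varies independently, so the pool's
# observed peak stays well below the sum of per-cell peaks.
import random

random.seed(7)
CELLS, PEAK_PER_CELL, SAMPLES = 30, 1.0, 10_000

pooled_peak = 0.0
for _ in range(SAMPLES):
    # Assumed per-cell load: uniform between 10% and 100% of its peak.
    load = sum(random.uniform(0.1, 1.0) * PEAK_PER_CELL for _ in range(CELLS))
    pooled_peak = max(pooled_peak, load)

dedicated = CELLS * PEAK_PER_CELL  # every cell provisioned for its own peak
print(f"dedicated: {dedicated:.0f}, pooled peak: {pooled_peak:.1f} "
      f"({100 * (1 - pooled_peak / dedicated):.0f}% saving)")
```

The exact saving depends entirely on the assumed load distribution; the point is only that independent, bursty cells let a pool be dimensioned for the observed aggregate peak rather than the arithmetic sum of peaks.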
Operators are also keen to take advantage of NFV and software-defined networks (SDNs) by moving to C-RAN architectures. Transport bandwidth and latency requirements will relax with the advent of new 5G functional splits that decompose the RAN, which will lead to greater adoption of C-RAN. These splits will enable flexible placement of key RAN functions across the radio unit (RU), distributed unit (DU), and centralized unit (CU). For example, to minimize latency for real-time applications, such as autonomous driving and remote surgery that require < 1-ms latency overall, the traffic can be directed to a DU residing at an MEC data center located in proximity (for example, within a few kilometers) to the RU. Here, the same central office that serves as the 4G C-RAN hub can become the new MEC site.
Conversely, for non-real-time applications, the traffic can be forwarded to a data center in the core cloud to take advantage of packet aggregation, because the latency budget is on the order of ~5 ms. In this case, a network interface can provide connectivity to the CU, which may be located more than 100 km away. Figure 2 shows how the DU and CU locations can be varied to meet different network requirements. This affects the functional splits and the fronthaul network needed.
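The distance figures above follow from simple fiber propagation arithmetic. As a rough sketch, assuming light travels through fiber at about c/1.5 (roughly 5 µs per kilometer) and a commonly cited 100 µs one-way fronthaul budget (neither figure is from this article):

```python
# Rough fronthaul reach estimate from a one-way latency budget.
# Assumption: light in fiber travels at ~c/1.5, i.e. roughly
# 5 microseconds per kilometer each way.

SPEED_OF_LIGHT_KM_PER_S = 299_792.458
FIBER_REFRACTIVE_INDEX = 1.5

def max_fiber_reach_km(one_way_budget_us: float, processing_us: float = 0.0) -> float:
    """Max one-way fiber distance once processing delay is subtracted."""
    propagation_us_per_km = FIBER_REFRACTIVE_INDEX / SPEED_OF_LIGHT_KM_PER_S * 1e6
    return (one_way_budget_us - processing_us) / propagation_us_per_km

# A 100 us one-way budget caps the RU-DU fiber run at roughly 20 km
# even with zero equipment processing delay.
print(f"{max_fiber_reach_km(100):.1f} km")
```

Subtracting realistic equipment processing time shrinks that reach further, which is why latency-critical DUs end up within a few kilometers of the RU while relaxed ~5 ms budgets allow CUs beyond 100 km.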
Evolution of fronthaul rates and protocols
Given the high capacity and strict latency requirements of C-RAN, existing deployments have mostly used WDM-based systems (including both passive and active variants). WDM-based systems provide better scale and more cost-effective connectivity than dedicated fibers. This is especially the case for larger sites, which have many remote radio heads (RRHs), each requiring dedicated connectivity (over separate wavelengths, for example). Early 4G LTE deployments used WDM systems capable of supporting transport rates of 2.5 Gbps to 5 Gbps. As LTE has continued to evolve to support higher capacity with more spectrum and more antennas (for example, 8T8R operation), higher 10-Gbps rates have also been deployed.
Although CPRI rates are defined up to ~25 Gbps and considerable investment in LTE-A systems will continue for years to come, the current consensus is that CPRI will not scale effectively for 5G systems operating in the high-band spectrum. These systems would use larger numbers of antennas (mMIMO) and more spectrum ― potentially in the hundreds of megahertz. Consequently, new functional splits and new protocols that take advantage of Ethernet aggregation and transport, such as eCPRI, are being defined to support 5G.
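The scaling problem can be made concrete with a back-of-the-envelope CPRI rate calculation. The sample widths, overhead factors, and antenna counts below are assumptions drawn from common CPRI configurations, not figures from this article:

```python
# Back-of-the-envelope CPRI bit-rate estimate, illustrating why
# time-domain transport does not scale to 5G mMIMO. Assumed factors:
# 15-bit I and Q samples, 16/15 control-word overhead, 10/8 line coding.

def cpri_rate_gbps(sample_rate_msps, antennas, iq_bit_width=15,
                   control_overhead=16 / 15, line_coding=10 / 8):
    """Approximate CPRI line rate: one I and one Q sample per antenna."""
    payload = sample_rate_msps * 1e6 * 2 * iq_bit_width * antennas
    return payload * control_overhead * line_coding / 1e9

# LTE 20 MHz carrier (30.72 Msps), 2 antennas: ~2.5 Gbps,
# matching the rates of early WDM fronthaul deployments.
print(f"{cpri_rate_gbps(30.72, 2):.2f} Gbps")
# 5G 100 MHz carrier (122.88 Msps), 64-antenna mMIMO: hundreds of Gbps.
print(f"{cpri_rate_gbps(122.88, 64):.0f} Gbps")
```

Because the time-domain rate multiplies linearly with both sample rate and antenna count, a wideband mMIMO carrier lands far beyond the ~25 Gbps top CPRI rate, which is what motivates the new splits discussed next.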
Functional splits and the move to packet switched fronthaul
CPRI is a synchronous protocol that carries the time-domain sampled RF signal transmitted/received by the remote RUs. Besides the bandwidth scaling limitation noted above, the CPRI protocol must be terminated at a single end point, cannot be statistically multiplexed, and requires constant-latency links with very tight tolerances between the BBU and RRHs. In addition, the RF on-air frequency is derived from the CPRI signal rate, so any rate impairments directly affect the quality of the air interface. These challenges have led to work within the standards bodies (the 3GPP, the CPRI Cooperation Group, and the IEEE) to develop new functional splits and protocols between the radio and baseband that relax both the bandwidth and latency requirements.
One approach is to move some of the low-PHY processing to the radio, which reduces the transport bandwidth needed. For example, using an intra-PHY split (Option 7-2) would allow scaling by MIMO layers. Moreover, because the eCPRI interface operates within the frequency domain, it is more bandwidth efficient than the CPRI protocol, which operates in the time domain. For higher layer splits (for example, PDCP-RLC), latency requirements can be relaxed as well. However, a tradeoff in performance results from going to higher layer splits. Ultimately the splits chosen need to match the requirements of the underlying applications they are intended to support. This choice might also include using different splits for the uplink and downlink streams.
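The bandwidth advantage of a frequency-domain split can be sketched numerically: time-domain transport scales with antenna count, while frequency-domain transport scales with MIMO layers and only the occupied subcarriers. All parameters below are illustrative assumptions, not figures from this article:

```python
# Rough comparison of time-domain (CPRI-style, per antenna) vs
# frequency-domain (intra-PHY 7-2-style, per MIMO layer) fronthaul payload.

def time_domain_gbps(sample_rate_msps, antennas, bits_per_iq=2 * 15):
    # Time-domain IQ is carried for every antenna, whether or not the
    # spectrum is occupied.
    return sample_rate_msps * 1e6 * bits_per_iq * antennas / 1e9

def freq_domain_gbps(subcarriers, symbols_per_s, layers, bits_per_iq=2 * 14):
    # Frequency-domain IQ is carried only for used subcarriers, per layer.
    return subcarriers * symbols_per_s * bits_per_iq * layers / 1e9

# Assumed 5G NR 100 MHz carrier with 30 kHz subcarrier spacing:
# 3276 subcarriers, 28 000 OFDM symbols per second.
antennas, layers = 64, 8
td = time_domain_gbps(122.88, antennas)
fd = freq_domain_gbps(3276, 28_000, layers)
print(f"time-domain: {td:.0f} Gbps, frequency-domain: {fd:.1f} Gbps")
```

Even before compression, carrying 8 layers in the frequency domain requires roughly an order of magnitude less bandwidth than carrying 64 time-domain antenna streams, which is the scaling-by-layers benefit noted above.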
The new functional splits also enable the creation of new packetized protocols and interfaces that leverage Ethernet and the benefits that it brings. In June 2018, the CPRI Cooperation Group released version 1.2 of the eCPRI Transport Network requirements document. It defined the eCPRI protocol for low-layer intra-PHY functional splits (FS-LL) for connectivity between the RU and DU. The document outlines the delay budget and timing accuracy requirements, as well as specifying the use of either Ethernet or IP packet fronthaul networks for better scale and aggregation.
In April 2017, the 3GPP announced the selection of option 2 (PDCP/high RLC) as the high-layer split point (FS-HL), where the F1 interface provides connectivity between the DU and CU. Likewise, the IEEE 1914 working group has been defining the NGFI specification and Radio over Ethernet (RoE) standard, which calls for the encapsulation of CPRI in Ethernet frames. All of this has contributed to the normalization of Ethernet as the fronthaul link layer transport protocol.
Deterministic Ethernet for fronthaul
Mobile fronthaul places new challenges on Ethernet transport networks, especially for delay, delay variation, packet loss, and reliability parameters. The delay requirements are particularly challenging. While Ethernet has been evolving to meet stringent audio and video requirements in industrial control and automotive applications, fronthaul transport requirements are significantly more challenging. For this reason, the IEEE 802.1CM working group has taken on the task of defining Time-Sensitive Networking (TSN) for fronthaul, including Class 1 CPRI v7.0 and Class 2 eCPRI. TSN defines profiles for bridged Ethernet networks that will carry 4G and 5G fronthaul streams.
These profiles will also include standardized synchronization requirements and TSN mechanisms for minimizing delay and controlling delay variation. IEEE 802.1Qbu defines a preemption mechanism that minimizes delay for express traffic: fronthaul frames can preempt other “best effort” traffic on the same Ethernet port even after transmission has started. In addition, because failure detection is important for network resiliency, new redundancy and failure detection capabilities are also being studied. In this regard, mechanisms such as Ethernet Ring Protection (ERP) and Link Aggregation Group (LAG) will provide resiliency. As shown in Figure 3, all of these capabilities are being incorporated into new TSN packet optical switches to transport stringent 4G/5G fronthaul traffic over a deterministic Ethernet bridged network.
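The benefit of preemption can be quantified with simple serialization arithmetic. In the sketch below, the ~127-byte worst-case non-preemptable fragment is an assumption based on the 64-byte minimum fragment size in 802.3br; exact values depend on framing overheads:

```python
# Worst-case time an express (fronthaul) frame can be blocked by a frame
# already in transmission, with and without IEEE 802.1Qbu/802.3br
# frame preemption.

def blocking_us(frame_bytes: int, link_gbps: float) -> float:
    """Serialization time of frame_bytes at link_gbps, in microseconds."""
    return frame_bytes * 8 / (link_gbps * 1e3)

MAX_FRAME = 1522    # full-size VLAN-tagged Ethernet frame, bytes
MAX_FRAGMENT = 127  # assumed worst-case non-preemptable fragment, bytes

for rate_gbps in (10, 25):
    without = blocking_us(MAX_FRAME, rate_gbps)
    with_pre = blocking_us(MAX_FRAGMENT, rate_gbps)
    print(f"{rate_gbps} GbE: {without:.2f} us without preemption, "
          f"{with_pre:.2f} us with preemption")
```

On a 10 GbE port, preemption cuts the worst-case blocking of a fronthaul frame from about 1.2 µs to near 0.1 µs, a meaningful fraction of the delay-variation budgets that fronthaul classes impose.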
Clearing a path ahead with converged transport
Mobile broadband demand is continuing to drive the need for more capacity. This is continuing to fuel investments in LTE and LTE-A networks while, at the same time, driving major investments in 5G as well. A scalable, converged transport network that can tap these investments and leverage existing infrastructures is needed to cost-effectively deliver extremely scalable capacity while also maximizing fiber resources.
To enable such a converged network, commercially available “anyhaul” platforms enable the deployment of a variety of distributed RAN, centralized RAN, and C-RAN architectures. They also support high-capacity data rates, delivering minimum latency with maximum reach. These capabilities let operators optimize performance and deploy services faster ― all with a fully transparent, scalable approach that supports all services.
With multiple RAN architectures and RAN technologies in the mix, mobile transport networks must accommodate diverse protocols and traffic types (Figure 4). These must also be supported in accordance with their underlying service characteristics. In other words, mobile transport networks must have the flexibility required to support “anyhaul” while also meeting the strict synchronization and reliability requirements for each service. The optimal technology for these networks provides mobile transport in support of CPRI fronthaul, Ethernet fronthaul, and Ethernet backhaul applications.
Hector Menendez is product marketing manager, IP/Optical Networks at Nokia.