Dynamic bandwidth allocation sets GPONs apart

July 1, 2008

by Amir Sheffer

PON technology has seen increasing deployment in the broadband access network as a premier approach for delivery of high bandwidth to consumers and businesses in all geographic regions. It provides a low cost of deployment, a low cost of maintenance, and flexible delivery of high bandwidth to subscribers.

Using a single fiber shared among many subscribers is the key to lowering deployment costs, but this raises a new (to the broadband access network) technical challenge: how to share the fiber in ways tailored to fit the individual service-level agreements (SLAs) for each subscriber. On the one hand the service provider wants to deploy as little bandwidth as possible to meet all of the subscribers' bandwidth guarantees. On the other hand it wants to provide extra bandwidth for premium services for which it can charge. The solution to this sharing dilemma is an effective bandwidth allocation scheme that gives the service provider the ability to tailor bandwidth delivery based on service requirements and subscriber needs.

For EPON-based networks, dynamic bandwidth allocation (DBA) algorithms are well known and field proven, with high performance and stability in high-volume deployments in Asia. For GPONs defined by ITU-T G.984, an explicit bandwidth allocation algorithm was intentionally left out of the standard, which describes only the mechanisms that can be used to allocate upstream bandwidth.

Choosing the right DBA algorithm is key to a successful implementation of a GPON. The correct choice will enable the service provider to deliver the optimum mix of bandwidth tailored to individual subscriber needs and give the provider the ability to reshape its broadband access network via software rather than an expensive hardware reconfiguration.

As shown in Fig. 1, a GPON deployment includes optical line terminals (OLTs) in the central office and optical network terminals (ONTs) at the customer premises or at the curb outside a residence or apartment building. It uses a tree or tree-and-branch topology and inexpensive optical fiber, shared and split among individual subscribers. The OLTs connect to the IP network and deliver voice, data, and video services to the subscriber through the GPON.

It is practical for up to 64 ONTs to share a single fiber, with the standard defining up to 128 ONTs. However, sharing of a single fiber requires careful design of the bandwidth allocation in the network.
The shared use of infrastructure is a key to GPON economics. How to share that bandwidth is a question carriers must answer with the services they plan to support in mind.

Network operators must plan for flexible service deployments, as future needs are not easily anticipated. For example, the popularity of two bandwidth-intensive traffic consumers, peer-to-peer applications and video/audio upload to sites such as YouTube or Flickr, has soared unexpectedly in the past three years. As shown in Fig. 2, global Internet traffic is expected to quadruple between 2006 and 2011, according to Cisco Systems.

A GPON includes a mix of services. Some, such as voice over IP (VoIP) or native TDM, require constant upstream bandwidth, and the OLT may statically allocate the bandwidth for these services.

Other IP-based services, such as Internet browsing, streaming video, file sharing, and file download, are bursty by nature. To achieve the highest upstream bandwidth utilization, the OLT should allocate the upstream bandwidth for these services dynamically using DBA. With a good DBA algorithm, the GPON upstream channel can be oversubscribed, thus increasing the number of ONTs that can connect to the network.

A simple example is a network with 32 subscribers, where each may be allocated upstream bursts of up to 100 Mbits/sec. The required capacity for this network is 3.2 Gbits/sec, nearly three times the GPON upstream capacity, defined by the standard as 1.244 Gbits/sec. With a good DBA, these data rates can be supported and the service provider can charge multiple users for the full-bandwidth service.
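
The arithmetic behind this example is simple; the short Python sketch below (with illustrative variable names) just makes the oversubscription ratio explicit.

```python
# Back-of-the-envelope check of the 32-subscriber example above.
# All figures come from the article; the names are illustrative only.

NUM_ONTS = 32                 # subscribers sharing the PON
PEAK_PER_ONT_MBPS = 100       # peak upstream burst sold to each subscriber
GPON_UPSTREAM_MBPS = 1244     # GPON upstream line rate (1.244 Gbits/sec)

sold_capacity = NUM_ONTS * PEAK_PER_ONT_MBPS            # 3,200 Mbits/sec
oversubscription = sold_capacity / GPON_UPSTREAM_MBPS   # ~2.6, i.e. nearly 3:1

print(f"Sold upstream capacity: {sold_capacity} Mbits/sec")
print(f"Oversubscription ratio: {oversubscription:.2f}:1")
```
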
Figure 2. A variety of applications should cause global consumer Internet traffic to continue to grow strongly for the foreseeable future.

With the transition to a full IP traffic model, applications are expected to become even burstier. As a result, the efficiency of the DBA algorithm becomes even more important. Efficiency directly affects latency. For example, a burst of requests arriving from all ONTs accumulating to 3.75 Gbits/sec over 10 msec would be cleared after 30 msec with 100% efficiency. In contrast, if the efficiency is only 50%, it would take 60 msec to clear. Not having an efficient DBA means that all of the active ONTs notice an additional 30-msec average delay in this case.
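
As a quick check of those numbers, the following sketch (illustrative Python, not from the article) computes the clearing time for a queued burst at a given DBA efficiency.

```python
# Reproduces the burst-clearing arithmetic above; the traffic figures are taken
# from the article, the function name is an illustrative assumption.

GPON_UPSTREAM_BPS = 1.244e9   # upstream line rate

def clear_time_ms(burst_rate_bps, burst_duration_s, efficiency):
    """Time (msec) to drain a burst that arrived at burst_rate_bps for burst_duration_s."""
    queued_bits = burst_rate_bps * burst_duration_s
    drain_rate_bps = GPON_UPSTREAM_BPS * efficiency
    return 1e3 * queued_bits / drain_rate_bps

print(clear_time_ms(3.75e9, 0.010, efficiency=1.0))   # ~30 msec
print(clear_time_ms(3.75e9, 0.010, efficiency=0.5))   # ~60 msec
```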

Latency is an important parameter in which the maximum limit plays a more significant role than the average value. Average latency of 1.5 msec and a maximum value of 5 msec provide a different quality of experience than an average latency of 1.5 msec and a maximum latency of 50 msec. Latency is mostly negatively affected (increased) by the amount of time DBA takes to adjust to varying traffic patterns.

In the GPON, the OLT informs the ONTs of the upstream bandwidth allocation by transmitting bandwidth mapping messages (BWMAPs), which are built of multiple bandwidth allocations for the individual ONTs or the ONT transmission containers (T-CONTs). Each bandwidth allocation is an indication to an ONT to transmit in a defined time slot. The essence of DBA is dynamically calculating the BWMAPs to allocate the right bandwidth for each ONT.
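
To make the mechanism concrete, here is a simplified model of a bandwidth map (a Python sketch; the field names follow G.984.3 only loosely, and real allocation structures also carry flags and a CRC that are omitted here).

```python
# Simplified BWMAP model: one allocation entry per T-CONT, each granting a
# transmission window within the upstream frame.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Allocation:
    alloc_id: int     # identifies the T-CONT (and hence the ONT) being granted
    start_time: int   # start of the grant, in upstream byte positions
    stop_time: int    # end of the grant, in upstream byte positions

def build_bwmap(grants: List[Tuple[int, int]]) -> List[Allocation]:
    """Pack (alloc_id, size) grants back to back into one upstream frame map."""
    bwmap, cursor = [], 0
    for alloc_id, size in grants:
        bwmap.append(Allocation(alloc_id, cursor, cursor + size - 1))
        cursor += size
    return bwmap

# Example: three T-CONTs granted 500, 200, and 800 byte slots in this frame.
for entry in build_bwmap([(101, 500), (102, 200), (103, 800)]):
    print(entry)
```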

Early generations of PON allocate their upstream bandwidth using static, TDM-like allocation. Each ONT gets its predefined bandwidth allocation whether it uses it or not. This is ideal if all services in the network require constant allocation (VoIP or TDM) or when the provisioned upstream bandwidth is low enough, but it has low efficiency in the high-speed, high-utilization GPON. As long as the ONT traffic keeps arriving at a fixed rate, upstream utilization is good. Once the ONT goes idle, as shown for ONT B and ONT C in Fig. 3, its statically allocated bandwidth is unavailable to other ONTs in the network, and the overall upstream utilization degrades. The latency of ONT B is higher than it would have been had the data been transmitted in the available slots.

This inability to reassign idle bandwidth prevents the carrier from earning revenue from it, which leads to a higher cost per subscriber. As long as the network is not congested and the total upstream bandwidth required to support all ONTs at any given time is less than 1.244 Gbits/sec, the available upstream bandwidth is sufficient to service all ONTs with virtually no queuing.

If the unused bandwidth could have been allocated to other ONTs, it could have served the bursty services they run, improved the overall user experience and service level, and lowered the risk of queue overflow at these ONTs.

In contrast, a DBA algorithm enables bandwidth left unused by one user to be applied to other users, offering higher-speed connections and better upstream quality-of-service (QoS) parameters to residential and business customers.

As previously noted, the GPON standard provides the tools to implement DBA and leaves the bandwidth allocation scheme open to different implementations. Using these tools, the OLT can allocate bandwidth either per-ONT or per-ONT-per-service (using T-CONTs) and can base the bandwidth allocation on ONT requests, on measuring upstream traffic, or on any combination of the two, taking into consideration the SLA of the subscriber.
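
As a minimal sketch of how those tools might be combined (not the standard's algorithm; the class, names, and smoothing factor are illustrative assumptions), an OLT could derive a per-T-CONT demand figure from either an explicit queue report or a smoothed measurement of recent upstream traffic, capped by the subscriber's SLA:

```python
# Illustrative per-T-CONT demand estimation: prefer an explicit ONT queue report
# when one is available, otherwise fall back to a smoothed measurement of the
# traffic the OLT has actually observed, and cap the result at the SLA limit.

from typing import Optional

class TcontState:
    def __init__(self, sla_max_bytes: int):
        self.sla_max_bytes = sla_max_bytes   # per-cycle ceiling from the subscriber SLA
        self.measured_avg = 0.0              # smoothed observed upstream bytes per cycle

    def update_measurement(self, observed_bytes: int, alpha: float = 0.25) -> None:
        # Exponentially weighted average of traffic actually seen from this T-CONT.
        self.measured_avg = alpha * observed_bytes + (1 - alpha) * self.measured_avg

    def demand(self, reported_bytes: Optional[int] = None) -> int:
        # Use the queue report if the ONT sent one; otherwise use the estimate.
        estimate = reported_bytes if reported_bytes is not None else self.measured_avg
        return min(int(estimate), self.sla_max_bytes)

tcont = TcontState(sla_max_bytes=12_000)
tcont.update_measurement(8_000)
print(tcont.demand())                        # estimate-based (NSR-style) demand
print(tcont.demand(reported_bytes=20_000))   # report-based (SR-style), capped at 12,000
```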

Figure 4 shows an example where bandwidth not being consumed by ONTs A and C is applied to other ONTs that request it.

An effective DBA algorithm can adjust quickly, switching the upstream bandwidth allocation to fit changing traffic conditions. For any particular time slot, bandwidth can be allocated to a given ONT based on service requirements, SLA priorities or restrictions, and available bandwidth.

Oversubscription, where the total bandwidth sold to subscribers exceeds the physical capacity of the network, is a key factor in the profitability of access networks. It works in practice because it is unlikely that every subscriber will simultaneously consume all of the bandwidth to which their SLA entitles them.

Often in cable or DSL networks, providers oversubscribe physical network bandwidth by a factor of 4:1 to 20:1. To enable oversubscription, the network must provide QoS to its subscribers so that application requirements defined in the SLA can be met and priorities set and complied with for a given service.
Figure 3. The first generations of PON used static bandwidth allocation of upstream traffic. This method is easy and cheap to implement but lacks efficiency and support for overprovisioning.

An SLA, a service contract between provider and subscriber, defines the bandwidth guaranteed to that subscriber (the committed information rate, or CIR) and the additional bandwidth that may become available to the subscriber (the excess information rate, or EIR). When a DBA algorithm is used, the bandwidth allocation to an ONT can peak at higher rates, up to the EIR. Bandwidth not consumed by some ONTs is allocated to others, resulting in faster network behavior and a better customer experience. This also generates additional revenue for the operator, as statistically there is always excess bandwidth for which the operator can charge.
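
One straightforward way to express that CIR/EIR policy (a sketch under the assumption of a simple equal-share excess pass; real DBA engines typically weight the excess by SLA priority) is a two-pass allocation:

```python
# Hedged sketch of CIR/EIR allocation: each T-CONT first receives its committed
# rate, then whatever upstream capacity remains is shared among T-CONTs that
# still have demand, up to their excess-rate ceiling.

def allocate(demands, cirs, eirs, capacity):
    """demands/cirs/eirs: per-T-CONT bytes for this cycle; returns per-T-CONT grants."""
    n = len(demands)
    grants = [min(demands[i], cirs[i]) for i in range(n)]      # pass 1: guarantee CIR
    remaining = capacity - sum(grants)
    while remaining > 0:
        # T-CONTs still below both their demand and their EIR ceiling.
        hungry = [i for i in range(n) if grants[i] < min(demands[i], eirs[i])]
        if not hungry:
            break
        share = max(1, remaining // len(hungry))               # pass 2: split the excess
        for i in hungry:
            extra = min(share, min(demands[i], eirs[i]) - grants[i], remaining)
            grants[i] += extra
            remaining -= extra
            if remaining <= 0:
                break
    return grants

# Three T-CONTs sharing a 10,000-byte upstream budget: the CIR left unused by the
# idle T-CONT is redistributed to the busy ones, up to their EIR.
print(allocate(demands=[9000, 0, 7000], cirs=[3000, 3000, 3000],
               eirs=[8000, 8000, 8000], capacity=10000))   # -> [5000, 0, 5000]
```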

There are three main areas to consider in evaluating DBA algorithms: latency at the ONT, or the time a packet waits at an ONT before upstream transmission; fairness, defined as the ability to satisfy SLAs for all ONTs; and utilization, or the amount of usable bandwidth in the upstream channel.

Performance in one area affects the others. For example, reduced utilization increases latency, because less bandwidth is available to empty an ONT's data queue and the queue therefore takes longer to drain. Likewise, poor fairness means that some ONTs are served more slowly than others, resulting in higher latency for those ONTs. Of the three, latency is the best reflection of the quality of the DBA algorithm, since utilization and fairness are most often tuned to deliver the best latency.
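
As one illustrative way (an assumption, not a method from the article) to score a DBA run against those three criteria, the following sketch reduces per-ONT measurements to latency, fairness, and utilization summaries:

```python
# Illustrative DBA scoring: latency from per-packet waiting times, fairness as
# the fraction of ONTs whose grant met their SLA commitment, and utilization as
# the share of the upstream frame that carried subscriber data.

def score_dba(latencies_ms, granted_bytes, committed_bytes, used_bytes, frame_bytes):
    avg_latency = sum(latencies_ms) / len(latencies_ms)
    max_latency = max(latencies_ms)
    fairness = sum(g >= c for g, c in zip(granted_bytes, committed_bytes)) / len(committed_bytes)
    utilization = sum(used_bytes) / frame_bytes
    return avg_latency, max_latency, fairness, utilization

print(score_dba(latencies_ms=[1.2, 1.8, 4.9],
                granted_bytes=[3000, 2500, 4000],
                committed_bytes=[3000, 3000, 3000],
                used_bytes=[3000, 2500, 4000],
                frame_bytes=12000))
```
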
Figure 4. Dynamic bandwidth allocation enables unused bandwidth to be applied to users who need it in an efficient manner.

There are two categories of DBA algorithm: status reporting (SR) and non-status reporting (NSR). In SR schemes, all ONTs transmit their upstream data queue status to the controlling OLT, which uses the information to optimize the upstream bandwidth allocation. The OLT combines the queue information, the SLA of each subscriber, and the application requirements to tailor the appropriate bandwidth for each ONT.

In the NSR scheme, the OLT estimates ONT data queue status based on previous transmission cycles. In normal operation, an ONT transmits idle frames when it doesn't have data to send. The OLT monitors this and decreases the bandwidth allocation to that ONT. When the situation changes, the OLT will increase the bandwidth to a given ONT.

The downside of NSR schemes is that the bandwidth adjustment takes place in small incremental steps, either up or down, until a "no data" rate or a "maximum data rate" is reached. This approach does not handle switching between high-traffic and low-traffic situations well. Whether it is ramping the allocation up or down, latency suffers compared with the optimum for the given traffic pattern. In all cases, the OLT either overestimates or underestimates the data queue and yields a less than optimal result.
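
The behavior described above can be sketched as a simple grant-adjustment rule (illustrative Python; the step size and bounds are assumptions, not values from the standard):

```python
# NSR-style grant adjustment: with no queue reports, the OLT nudges an ONT's
# grant up when the last grant was fully used and down when it sees idle frames.

def nsr_adjust(current_grant, last_grant_fully_used, step=1000,
               min_grant=0, max_grant=15000):
    """Return the next grant (bytes) for one T-CONT under a simple NSR policy."""
    if last_grant_fully_used:
        return min(current_grant + step, max_grant)   # ramp up toward the peak rate
    return max(current_grant - step, min_grant)       # back off when idle frames appear

# The stepwise ramp is why NSR reacts slowly to a sudden burst: starting from an
# idle estimate, it takes many cycles to reach the ONT's real demand.
grant = 0
for cycle in range(5):
    grant = nsr_adjust(grant, last_grant_fully_used=True)
print(grant)   # only 5000 bytes after five cycles, even if the queue is much larger
```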

On the other hand, SR schemes are able to quickly adjust bandwidth based on exact transmission characteristics as reported by the ONTs. This makes SR DBA algorithms superior to all NSR algorithms. With SR-based DBA schemes, oversubscription of 4:1 to 20:1 can be achieved with high upstream efficiency.

An SR DBA implementation allows oversubscription and can support 5 Gbits/sec or more of allocated upstream bandwidth by leveraging traffic burstiness. Through statistical multiplexing and ONT reports, latency can be reduced by as much as 90%.

Amir Sheffer is a product line manager at PMC-Sierra (www.pmc-sierra.com).
