Is It Best to Build or Buy DCI in the Cloud Era?

Data center consolidation, software-defined networking, the move to virtualized IT models, and the cloud are all prompting discussions about whether it is best to build or buy data center interconnect (DCI) in the cloud era. A growing number of large organizations are evaluating whether to build their own private DCI to support private cloud. For such organizations, what is the business case to make the strategic investment to build their own DCI networks, and how is the cost justified when compared to managed DCI services?

For revenue-generating communications service providers (CSPs), web-scale internet content providers (ICPs), and large carrier-neutral colocation providers (CNPs), it makes economic sense to build DCI infrastructure to connect their data centers. All are seeing rapid growth of DCI traffic driven by distributed cloud applications. Growth is most acute in metro areas, where deploying data centers closer to customers ensures optimum cost/performance.

Both CSPs and web-scale ICPs are investing heavily in metro and long-haul transport networks to support DCI. CSPs also offer a range of managed DCI services to all players in the cloud ecosystem, including smaller ICPs, CNPs, and their traditional enterprise customers (see Figure 1).

Figure 1. The cloud ecosystem.

Meanwhile, many large enterprises and organizations – particularly in finance, healthcare, government and the public sector – have concluded that taking full advantage of new agile, flexible, and dynamic cloud IT models requires a re-evaluation of their current DCI approaches.

These organizations typically use managed DCI services between their data centers for applications such as business continuity and disaster recovery (BCDR). But managed DCI services can prove inflexible, costly to scale, and slow to provision for agile, bandwidth-hungry cloud applications.

In addition, many organizations need improved control over their DCI, particularly for latency-sensitive, business-critical cloud applications. They also have data compliance, integrity, security, and sovereignty concerns to address.

For these reasons, a growing number of large organizations are evaluating whether to build their own private DCI to support private cloud. For such organizations, what is the business case to make the strategic investment to build their own DCI networks, and how is the cost justified when compared to managed DCI services?

Private DCI build basics

Organizations can build private DCI by leasing or buying dark fiber between their data center locations from a fiber provider and installing and managing their own optical DWDM equipment. This approach provides scalable bandwidth with the best cost/performance ratio and the lowest latency compared to a managed wavelength or Carrier Ethernet service. The advantages and benefits of using dark fiber include:

  • The ability to lease existing fiber and ducts from a dark fiber provider. In most locations there is no need to lay or pull new fiber between data centers, although there may be a cost to connect premises to the nearest fiber trunk. However, this cost is often low when using existing local fiber access infrastructure, such as off-net laterals or local loops.
  • A choice of diverse fiber routes with the option of single-drop or dual-drop connections into premises. Redundant fiber paths with a single point of entry, or redundant fiber paths with redundant points of entry, significantly increase resilience but at additional cost.
  • Access to a wide range of intra- and inter-city connectivity from dark fiber providers or CSPs. In many locations, particularly cities, there is often a choice of dark fiber providers as well as service providers that offer metro, regional, and national reach.

Increased competition among fiber providers means that, in many areas, the cost of leased dark fiber has decreased. Also, fiber providers have become more willing to offer shorter contracts and more attractive support options.

In many cases, dark fiber can be paid for over a set contract term using monthly recurring charges rather than paying the cost of the full contract term up front. Of course, there are regional variations, and the availability of dark fiber varies from region to region and country to country.

Just how much bandwidth is required for a private DCI build?

In most cases, today's DCI applications require bandwidth of between 1 and 10 Gbps. However, when planning for a private DCI build to support a private/hybrid cloud strategy, it is important to determine how much bandwidth will be needed for future cloud IT needs.

DCI bandwidth is likely to grow significantly because of data center consolidation and growth in inter-data center traffic (see Figure 2). Consolidation concentrates workloads into fewer, larger data centers with converged architectures. Upgrading enterprise servers with 40G network interface cards and upgrading switch fabrics with 100G uplinks increase traffic within the data center. But such upgrades also increase traffic between data centers.

Figure 2. How much bandwidth is required for DCI?

The graph on the left of Figure 2 shows a simple example of a consolidated enterprise data center with 100 servers equipped with 40G NICs. Just 1% of data exiting the data center generates 40G of DCI traffic. Nokia Bell Labs' most recent study of metro traffic, shown in the graph on the right, indicates a 430% increase in metro DCI traffic from 2015 to 2020.1 Traffic will clearly increase significantly with data center consolidation and the move to cloud, and organizations are likely to need more DCI bandwidth than they realize.
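The arithmetic behind the Figure 2 example is simple enough to sketch. The short Python snippet below uses the figures from the worked example; the 1% exit fraction is the article's illustrative assumption, not a universal rule.

```python
# Rough DCI bandwidth estimate for a consolidated data center,
# following the worked example in Figure 2.

def dci_bandwidth_gbps(servers, nic_gbps, exit_fraction):
    """Aggregate server bandwidth multiplied by the share of
    traffic that leaves the data center."""
    return servers * nic_gbps * exit_fraction

# 100 servers with 40G NICs; 1% of traffic exits the data center.
estimate = dci_bandwidth_gbps(servers=100, nic_gbps=40, exit_fraction=0.01)
print(estimate)  # → 40.0 Gbps of DCI traffic
```

The same formula makes it easy to test other scenarios, such as a future upgrade to 100G NICs, which would push the same 1% exit fraction to 100G of DCI traffic.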

What are the benefits of dark fiber for a private DCI build?

Organizations can achieve a number of benefits by using dark fiber for a private DCI build. These include:

  • Cost savings and efficiency improvements. Consolidating data centers, reducing the number of servers, and implementing virtualization result in significant operational expense (opex) savings and improved server utilization.
  • Greater scalability and resiliency. The use of leased dark fiber with diverse paths and dual points of entry into the premises significantly increases DCI network performance and resilience as well as availability of applications, services, and data.
  • Improved control and security. Installing and managing DWDM equipment gives organizations complete control over their DCI, as well as much higher bandwidth and the ability to easily increase bandwidth at marginal incremental cost.

When does it make sense to use private-build DCI?

For bandwidths of 10G and less, the monthly recurring cost of a managed DCI service based on Carrier Ethernet or managed wavelengths may be less than that of a dark fiber approach. In the 10G to 40G range, private-build DCI becomes cost-effective, as managed DCI services grow cost-prohibitive above 10G. At 40G and above, a private DCI build using leased dark fiber can be the most economical choice.

This is particularly true when consolidating and virtualizing data centers. Cloud demands higher bandwidths, and the cost of linking fewer data centers at higher bandwidths using dark fiber is often significantly less than the cost of connecting multiple data centers using lower-bandwidth managed DCI services.

And private-build DCI makes even more sense when considering future bandwidth needs. With managed DCI services, CSPs may not be able to provision additional bandwidth where and when needed, or it may be cost-prohibitive to do so. With dark fiber, organizations can provision additional bandwidth easily at marginal incremental cost by lighting additional wavelengths on the existing DWDM equipment.
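These cost dynamics can be sketched with a toy model. All prices below are hypothetical placeholders (real crossover points depend entirely on local market rates and contract terms), but the shape matches the argument above: managed services scale roughly linearly per circuit, while dark fiber has a high fixed cost and a low marginal cost per additional wavelength.

```python
import math

def managed_cost(bandwidth_gbps, price_per_10g=5000):
    """Managed DCI services are typically priced per circuit, so monthly
    cost scales roughly linearly with the number of 10G circuits ordered.
    price_per_10g is a hypothetical placeholder."""
    circuits = math.ceil(bandwidth_gbps / 10)
    return circuits * price_per_10g

def dark_fiber_cost(bandwidth_gbps, fiber_lease=8000, dwdm_amortized=4000,
                    per_wavelength=300):
    """Dark fiber carries a high fixed monthly cost (lease plus amortized
    DWDM equipment), but each extra 10G wavelength adds only a small
    increment. All figures are hypothetical placeholders."""
    wavelengths = math.ceil(bandwidth_gbps / 10)
    return fiber_lease + dwdm_amortized + wavelengths * per_wavelength

for bw in (10, 40, 100):
    print(f"{bw}G: managed {managed_cost(bw)}, dark fiber {dark_fiber_cost(bw)}")
```

With these placeholder numbers, the managed service wins at 10G, the two approaches cross somewhere between 10G and 40G, and dark fiber is clearly cheaper at 100G, mirroring the breakpoints described above.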

A business case for private DCI

Consider the business case example shown in Figure 3, which compares the prior mode of operation (PMO) of a large organization using a managed DCI service with a future mode of operation (FMO) using a private DCI build over dark fiber. The PMO comprises two point-to-point 100G Ethernet links for resilience; the FMO comprises diverse dark fiber paths for resilience and purchased, self-managed 100G DWDM equipment with redundant hardware.

Figure 3. Incremental cash flows generated by the FMO.

In the figure, the red line shows a cumulative view of the discounted cash flows generated by the FMO. Where the red line sits below the zero axis, initial investment is required. At the point where it crosses the zero axis, the FMO has generated as much cash flow as the PMO would have; from that point forward, the FMO generates higher cash flows than the PMO.

Figure 3 also shows the incremental cash inflows and outflows, with the blue bars showing investments (capex) and the grey bars showing expenses (opex):

  • The capex includes the costs of the DWDM hardware and software the project requires. Peak investment occurs in the first quarter of the project for initial deployment; the model assumes additional investments during the second and third quarters to accommodate bandwidth growth.
  • The opex covers the operating costs, including one-time initial setup costs for connections to the local fiber access infrastructure. Opex also includes monthly fiber maintenance and support costs, hardware and software maintenance costs, and ongoing support costs.

The grey bars in Figure 3 above the zero axis indicate that the FMO generates net opex savings from the first quarter, with a breakeven point in the fourth quarter. Reducing opex further – for example by using a network integrator to provide hardware and software maintenance and network support – results in higher cash flows and a shorter payback period.

Because the capex assumptions for the FMO include hardware sized to accommodate future needs, bandwidth can be added easily at marginal incremental cost by lighting additional wavelengths. In the PMO, adding managed Ethernet or wavelength services to increase bandwidth would incur significant additional cost – assuming the additional capacity could be readily provided at all.
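The mechanics behind Figure 3 can be approximated with a short discounted cash flow calculation. The quarterly cash flows and discount rate below are invented for illustration (the article does not publish the model's inputs); the method – discounting each quarter's incremental cash flow and accumulating until the curve crosses zero – is the standard payback analysis the figure depicts.

```python
# Sketch of the payback analysis in Figure 3: quarterly incremental
# cash flows of the FMO (private build) relative to the PMO (managed
# service), discounted and accumulated to find the breakeven quarter.
# All numbers are hypothetical.

def cumulative_discounted(cash_flows, quarterly_rate):
    """Running sum of discounted quarterly cash flows
    (the red line in Figure 3)."""
    total, curve = 0.0, []
    for quarter, cf in enumerate(cash_flows):
        total += cf / (1 + quarterly_rate) ** quarter
        curve.append(total)
    return curve

# Quarter 0: heavy initial capex; later quarters: net opex savings
# versus the managed service (hypothetical values).
flows = [-120_000, -20_000, 40_000, 60_000, 60_000, 60_000]
curve = cumulative_discounted(flows, quarterly_rate=0.02)
breakeven = next(q for q, v in enumerate(curve) if v >= 0)
print(breakeven)  # → 4: the first quarter the cumulative curve turns positive
```

Lowering ongoing opex (for example, the integrator-supported maintenance option mentioned above) raises the later cash flows and pulls the breakeven quarter earlier, which is exactly the effect described in the text.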

References

1. "Metro Network Traffic Growth: An Architecture Impact Study," September 2015 update to Bell Labs study published December 2013.

Gary Holland is director, verticals marketing, IP/Optical Networks, at Nokia. Holland is responsible for marketing Nokia's IP and Optical Networks (ION) portfolio to enterprise, industries, government, and public sector verticals, both directly and through Global Alliance partners. With more than 25 years' experience in the telecommunications industry, Gary has held senior roles in corporate, portfolio and product marketing, partner and business development, and product line management with technology companies including Alcatel-Lucent, Riverstone Networks, and Digital Equipment.
