AT&T: Operators should take more responsibility for their network technology

Sept. 1, 2017
To fully enable the benefits of open, white box technology and the ecosystem that will create it, service providers must be willing to take greater responsibility for network element and functionality creation and implementation, AT&T leaders assert in a blog posted earlier this summer.

Several network operators, in both the data center and carrier realms, have embraced the emerging ecosystem of open interfaces and white box hardware. Such network technology development approaches, which separate hardware from software, can lead to greater network flexibility, faster service deployment, and lower cost, they believe. AT&T is one such operator (see, for example, "AT&T field trials open source white box switches" and "AT&T to trial 400 Gigabit Ethernet in 2017"). But to fully enable such benefits and the ecosystem that will create the necessary technology, service providers must be willing to take greater responsibility for network element and functionality creation and implementation, AT&T leaders assert in a blog posted earlier this summer.

"We think white box will play a big part in the future of our industry. Our goal is to help this ecosystem grow in a way that benefits everyone," wrote Chris Rice, senior vice president of AT&T Labs, and John Medamana, vice president, network platforms at AT&T. "To do that, telecom companies need to get comfortable with taking more responsibility for the technologies powering their networks."

Rice and Medamana see an opportunity for service providers to evolve from being "professional buyers" to active participants in the development of the network technology that best meets their needs. That participation can take one of three forms, they write:

  1. They can assume almost total ownership of the process, establishing their requirements, acquiring best-of-breed hardware and software components that optimally meet those requirements, then performing the integration themselves.
  2. They can control the design and specification of the necessary hardware and software modules, then use third-party integrators for such functions as manufacturing, integration, and maintenance.
  3. They can specify their element and feature requirements and partner with a vendor who can meet them.

In all three scenarios, open interfaces represent an essential feature that enables the network automation operators will require, Rice told Lightwave. "If you look at what the webscale folks have done, the levels of automation that they've achieved, it really flows back from a chain that starts with open interfaces," he said. "Open interfaces allow you to collect the data you need. Collecting the data you need drives analytics. Analytics lead to insights. Insights within a closed loop lead to automation. Within a closed loop, having machine learning leads to automation that's smart learning."

A software-defined networking (SDN) environment best enables such automation, Rice believes. "Obviously to get programmability, to collect data and do automation, you have to have a feedback loop. So you've got to effect something with the data you collect, so you kind of need that programmability part. So you certainly need some kind of SDN," Rice explained. "I think it gives more flexibility when you combine SDN and NFV [network functions virtualization] but I wouldn't say both are a prerequisite. But I would say SDN is a prerequisite."
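
Rice's chain reads almost like pseudocode: open interfaces yield data, data feeds analytics, analytics yield insights, and insights drive action through the programmability SDN provides. As a rough Python sketch only (the controller URL, endpoints, and payloads below are hypothetical stand-ins for whatever open northbound API a given SDN controller exposes, not AT&T's actual interfaces), a closed telemetry loop might look like this:

    import time
    import requests

    CONTROLLER = "https://sdn-controller.example.net"  # hypothetical controller API

    def collect(link_id):
        """Open interfaces: pull telemetry for a link (hypothetical endpoint)."""
        resp = requests.get(f"{CONTROLLER}/telemetry/links/{link_id}", timeout=5)
        resp.raise_for_status()
        return resp.json()  # e.g. {"utilization": 0.93}

    def analyze(sample, threshold=0.9):
        """Analytics: turn raw data into an insight (is the link congested?)."""
        return sample["utilization"] > threshold

    def act(link_id):
        """Programmability: close the loop by steering traffic off the hot link."""
        resp = requests.post(f"{CONTROLLER}/policies/reroute",
                             json={"link": link_id}, timeout=5)
        resp.raise_for_status()

    def control_loop(link_id, interval=30):
        while True:  # the closed loop Rice describes
            if analyze(collect(link_id)):
                act(link_id)
            time.sleep(interval)

Swap the threshold test inside analyze() for a trained model and the same loop becomes the "smart learning" automation Rice points to.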

With a sufficiently robust ecosystem, operators can use whichever of the three technology development approaches best fits a particular situation. "I think part of it might be the maturity of the ecosystem, how it grows up," Rice said in explaining how he believes AT&T will decide which option to use. "I think part of it depends on the comfort level of the particular team with the technology and how best to go do it. I think that there are going to be natural breakpoints."

Any operator can follow a similar blueprint, Rice and Medamana write. An open ecosystem that promotes the development and use of white box, disaggregated hardware comprises four elements, they assert:

  1. Hardware Layer 1, which comprises merchant silicon
  2. Software Layer 1, which encompasses silicon interfaces that enable features to be abstracted and presented to the higher layers of the stack
  3. Hardware Layer 2, for which several network function layer reference models have appeared that original design manufacturers (ODMs) can use to deliver white box products
  4. Software Layer 2, which includes network operating systems and associated protocols

Rice believes progress is being made in creating robust, reliable options at each layer for service providers. If any area is a bit behind, it's Software Layer 1. "I think the place where probably maturing needs to occur is the kind of middle layer around the software, which is how do you build something that is fairly open and programmable northbound but can adapt easily southbound to the different ASICs and different SDKs and different merchant silicon that is out there," Rice explained. "I think that is a problem that a lot of folks have tried to solve for some time. I think there is a renewed interest in that, and I think we will start to see more and more options for that."
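
What Rice describes is essentially an adapter layer: one stable, open interface facing north, with per-chip bindings facing south. A minimal Python sketch, with illustrative class and method names rather than any real SDK, shows the shape of the problem:

    from abc import ABC, abstractmethod

    class SwitchSilicon(ABC):
        """Northbound interface: stable, open, silicon-agnostic (illustrative)."""
        @abstractmethod
        def create_route(self, prefix: str, next_hop: str) -> None: ...

    class VendorASdk(SwitchSilicon):
        """Southbound binding for one merchant-silicon SDK (hypothetical)."""
        def create_route(self, prefix, next_hop):
            print(f"[vendor-a] sdk_route_add({prefix}, {next_hop})")

    class VendorBSdk(SwitchSilicon):
        """A different ASIC's SDK, hidden behind the same northbound calls."""
        def create_route(self, prefix, next_hop):
            print(f"[vendor-b] asic_fib_insert({prefix} -> {next_hop})")

    def install_routes(silicon: SwitchSilicon):
        # A network OS at Software Layer 2 programs this one interface
        # and stays portable across white boxes built on either chip.
        silicon.create_route("10.0.0.0/24", "192.0.2.1")

    install_routes(VendorASdk())
    install_routes(VendorBSdk())

This is the same decoupling pursued by published silicon abstraction efforts in the open networking community, such as the Open Compute Project's Switch Abstraction Interface.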

A host of new players, including ODM houses, have come forward to support the open, white box ecosystem. At least some traditional vendors have as well, Rice reported. "I think some are more active than others. I think the ones who you are hearing about who are willing to disaggregate their own solutions and separate the hardware from the software and build hardware for other people to put their own software on or put their software on top of hardware that maybe is being built by an ODM or something like that – those are the ones who see the sense of it alongside their existing business models."

Those technology vendors who continue to resist the open white box evolution do so at their peril, if Rice is correct in his vision of the future. "In the end the efficacy of their hardware and efficacy of their software have to stand alone, and they have to be shown to be better together," Rice said. "Even if AT&T stopped doing it or someone else stopped doing it – we think the ecosystem will emerge and it will happen. It's a reality that [traditional vendors] have to deal with – and I think some have come to that conclusion maybe sooner than others."

For related articles, visit the Optical Technologies Topic Center.

For more information on high-speed transmission systems and suppliers, visit the Lightwave Buyer's Guide.
