Resilient network architectures save more than just data

Oct. 27, 2015
By now, most in our industry are familiar with how data centers are designed. Both mass and industry media have extensively covered the lengths engineers go to in order to guarantee uninterrupted service. Given the value of the information they hold, it's no surprise that data centers are designed with such maniacal attention to detail and redundancy.

Look at power: They have redundant power feeds from redundant local power plants. They have redundant generators in case the redundant power fails. They have massive redundant battery packs to hold up power just in case the redundant generators take too long to start. They even heat the generators' diesel engine combustion chambers 24/7 to guarantee they start at first crank.

Look at security: There are huge earthen mounds surrounding the data centers to act as blast shields, concrete barriers inside those, barbed-wire fences inside those, and guard gates inside those. And retina and palm scanners are de rigueur.

Look at network cabling: Everything is redundant, at least two to four times over, and cross-cabled.

Look at personnel: A staff is kept onsite 24/7, sitting and waiting for a failure to happen.

A major theme in the grand old world of telecommunications was five-nines reliability, meaning 99.999% uptime. That translates to less than 5.26 minutes of downtime per year, and today's data centers meet this mark easily.

In an effort to take this resiliency to the next level, as well as to scale fast enough to keep up with demand, data centers have migrated to leaf-and-spine architectures that inherently create duplication and redundancy. As they've grown from regional data centers to global networks, they've extended their leaf-and-spine connections to include geographically dispersed data centers. Software like Google's TrueTime and Microsoft's Replication Table then keeps multiple copies of data stored across the globe in sync.

These new architectures have been so successful that data centers can now absorb a 1-2% failure rate. In fact, an entire data center can go down without too much fuss.

But a funny thing has happened in the quest for more nines and greater resilience: localized resiliency has actually started to become less important. While it's well known that these new techniques save data and protect it from loss, what's less understood is just how much they save elsewhere.

Staffing: If a data center can withstand a 1-2% failure rate, it's enough to have a smaller team working normal hours, no overtime needed. If a server or switch goes down on Saturday, just fix it Monday morning.

Security: While the data is still very valuable and privacy must be protected, perhaps just a fence and a guard shack will suffice.

Power: Instead of massive banks of batteries that maintain power for the entire data center for an hour, maybe onboard battery or capacitive solutions that hold up individual servers for five minutes will suffice. Instead of massive rows of diesel generators, maybe it's enough to simply wait for the power company to restore service.

In fact, some data center operators are running banks of servers in uncontrolled, unsecured environments as an extreme test of just how far they can go. Is it perhaps time for the optical communications industry to do the same and test just how far it can go? Yes, a few data center operators have been very vocal about loosening electro-optical specifications, and a few component vendors have started to respond, but any significant cost benefit has yet to be realized.
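To put rough numbers on those claims, here is a minimal back-of-the-envelope sketch in Python (mine, not the author's; the replica counts and the 2% node failure rate are illustrative assumptions). It converts an availability target into a yearly downtime budget and shows how quickly independent replicas shrink the odds of losing any single piece of data:

```python
# Back-of-the-envelope availability math (illustrative assumptions only).

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def downtime_budget(nines: int) -> float:
    """Allowed downtime per year, in minutes, for an availability of N nines."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)

def data_loss_probability(node_failure_rate: float, replicas: int) -> float:
    """Probability that every replica of a datum is down at once,
    assuming independent failures across fault domains."""
    return node_failure_rate ** replicas

print(f"Five nines allows {downtime_budget(5):.2f} minutes of downtime per year")
# -> roughly 5.26 minutes, the figure cited above

for replicas in (1, 2, 3):
    p = data_loss_probability(0.02, replicas)  # 2% failure rate, upper end of 1-2%
    print(f"{replicas} replica(s): {p:.0e} chance a given datum is unavailable")
```

With three replicas spread across independent fault domains, even a 2% per-node failure rate leaves only a one-in-125,000 chance that a given datum is unreachable at any moment, which is the arithmetic that lets an entire data center go down "without too much fuss."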
Opening up a spec here and there isn't going to cut it. The resiliency that these new network architectures provide requires an entirely new mindset. When we hear "mega data center," we think of something as big as a sports stadium, as secure as a prison, and with as many generators as the Capitol. The data center of the future might be closer to an unstaffed building in the warehouse district with a single guard and badge reader at the door, or perhaps a floor in a nearby office tower. Similarly, the optical links inside might be consumer-grade optics, with overclocked lasers, marginal link budgets, and uncleaned ferrules, running bit error rates that would make us cringe in horror today. In hardware engineering circles, there's an old joke when a physical mistake is found: "We'll just fix it in software." Maybe the time has come to depend more upon software and architectural resiliency and really let loose the optical links.
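As a rough illustration of why links with cringe-worthy bit error rates might still be usable, the sketch below (again mine, not the author's, with an assumed 1500-byte frame and illustrative BER values) converts a raw BER into the frame-loss rate that retransmission or erasure coding higher in the stack would have to absorb:

```python
# Rough conversion from raw bit error rate (BER) to frame loss rate,
# assuming independent bit errors and no forward error correction.

FRAME_BITS = 1500 * 8  # a common 1500-byte Ethernet payload (assumption)

def frame_loss_rate(ber: float, frame_bits: int = FRAME_BITS) -> float:
    """Probability that at least one bit in a frame is corrupted."""
    return 1 - (1 - ber) ** frame_bits

for ber in (1e-12, 1e-9, 1e-6):
    print(f"BER {ber:.0e} -> about {frame_loss_rate(ber):.1e} of frames lost")
# At 1e-12 essentially nothing is lost; even at 1e-6, only about 1% of
# frames need to be repaired by FEC or retransmission higher in the stack.
```

Even at a BER of 1e-6, only about 1% of frames arrive corrupted, so whether a marginal link is acceptable becomes an architectural question rather than a purely optical one.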
Jim Theodoras is senior director of technical marketing at ADVA Optical Networking.