NRL demos InfiniBand all-optical network

At the OFC/NFOEC conference in Anaheim, CA, last month, the U.S. Naval Research Laboratory (NRL-Washington, DC) demonstrated a 10-Gbit/sec transcontinental network that featured all-optical technology and the industry’s largest InfiniBand WAN (IB WAN) implementation to date. While the demonstration was designed with government and military applications in mind, commercial applications could benefit as well.

Derived from the term “infinite bandwidth,” InfiniBand was developed as a high-speed, short-range interconnect between servers, storage devices, and network elements, typically within a computer room or data center. It is often used to connect large farms of systems into a grid computer, notes Dr. Henry Dardy, chief scientist in the NRL’s Information Technology Division.

“The problem with the DoD [Department of Defense] is the computing has to be accessible to someone who may not be at the site, so we started to look at ways to extend access to these assets as though you were anywhere in the globe,” he reports. In collaboration with Lambda OpticalSystems, YottaYotta, and Obsidian Research, NRL researchers developed the IB WAN.

“We have taken the InfiniBand protocol today at 4X, which is 10-gig, and we have extended it over OC-192c to bridge sites from the East Coast to the West Coast,” Dardy explains. “So the very InfiniBand protocol that is now starting to take off in the industry is not just limited to a computer room or even a campus.”
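The “4X is 10-gig” arithmetic Dardy cites can be checked against the standard InfiniBand figures (assumed here from the InfiniBand specification, not stated in the article): a single-data-rate (SDR) lane signals at 2.5 Gbit/s with 8b/10b line coding, so a 4X link signals at 10 Gbit/s and carries 8 Gbit/s of payload — close enough to OC-192c’s roughly 9.953-Gbit/s line rate to be bridged over it. A minimal sketch:

```python
# Back-of-the-envelope check of the rates in the quote above, using
# standard InfiniBand SDR figures (assumed, not from the article):
# one SDR lane signals at 2.5 Gbit/s with 8b/10b line coding.

SDR_LANE_GBPS = 2.5   # signaling rate of one SDR InfiniBand lane

def ib_rates(lanes: int) -> tuple[float, float]:
    """Return (signaling, payload) rates in Gbit/s for an SDR link."""
    signaling = lanes * SDR_LANE_GBPS
    payload = signaling * 8 / 10   # 8b/10b: 8 data bits per 10 line bits
    return signaling, payload

sig, payload = ib_rates(4)        # the 4X link used in the demo
print(f"4X signaling rate: {sig} Gbit/s")      # the "10-gig" figure
print(f"4X payload rate:   {payload} Gbit/s")  # after encoding overhead
```

The double and quadruple data rates Dardy mentions scale the same per-lane figure, which is why 10-gig is “just a starting point.”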

The OFC IB WAN demonstration extended nearly 5,000 km over the NRL’s existing all-optical testbed, which stretches 800 km from MIT in Cambridge, MA, to Washington, DC, where it touches the NRL, the Defense Information Systems Agency, Defense Advanced Research Projects Agency, Defense Intelligence Agency, and NASA. The NRL had previously demonstrated IB WAN over 1,200 km between Boston, Washington, DC, and Pittsburgh.

Though the demonstration featured 10-Gbit/sec speeds, Dardy hopes to move to 40 Gbits/sec in the near future. “There’s a double data rate and a quadruple data rate that is defined right now,” he says, “so 10-gig is just a starting point.”

The new LambdaNode 2000 switch platform from Lambda OpticalSystems enabled the U.S. Naval Research Laboratory to stream high-definition TV and large-scale InfiniBand-empowered remote visualization applications entirely in the optical domain.

The NRL worked with SAN platform vendor YottaYotta (Edmonton, Alberta) to add cache coherent storage to build a large file system on its 10-Gbit/sec network. “The file system can be distributed across sites, it can be mirrored, or it can be run remotely,” reports Dardy. “Whatever is requested at one site looks like it is available as if it were all there, even though it might be across two or three different sites in our network.” The NRL calls this application “virtualization,” in which multiple computers, regardless of their physical location, are linked to perform a single task or program as if they were all part of a single system.

In addition to virtualization, the NRL also demonstrated what it calls large data visualization. “In this case, you’ll see cities’ data and the ability to zoom in, what we call ‘space to face,’ going right on down to high resolution,” he explains. Such capabilities are applicable for homeland security applications, holistic target analysis, agricultural planning, and even the commercial real estate market.

The NRL also carried generic Ethernet circuits for other applications and used the available bandwidth to demonstrate real-time teleconferencing. “That’s our way of at least showing this next generation of thinking that we’ve been calling the Global Information Grid,” says Dardy.

But such applications demand a sophisticated next generation network. “When you have large numbers of satellites, for example, that are delivering large quantities of streaming video to earth, the ability to capture all this information, store it, move it around terrestrially, and then be able to act on it in real time assumes the existence of an underlying network infrastructure that can do this,” contends Irfan Ali, president and CEO of Lambda OpticalSystems (Reston, VA). “And that underlying network infrastructure has to be optical; you can’t do it with the existing options.”

The NRL tapped Lambda OpticalSystems to provide the all-optical portion of its network, which includes the deployment of two LambdaNode 2000 intelligent all-optical switches. Remaining in the optical domain is critical, says Ali, because each OEO conversion is expensive and affects the performance of the network. “Every conversion has latency associated with it, and for government applications, every single time you have an OEO conversion, you are introducing a breach point, from a security perspective, in the network,” he explains. The LambdaNode 2000 transports and switches traffic entirely as light. The switch also features a GMPLS control plane, enabling customers, including the NRL, to dynamically provision their networks.

Kailight Photonics’ TASR tunable all-optical signal regenerator provided all-optical 2R (reamplification and reshaping) signal regeneration and wavelength conversion for the all-optical portion of the U.S. Naval Research Laboratory’s 10-Gbit/sec InfiniBand demonstration.

The addition of Kailight Photonics’ (Dallas, TX, and Rehovot, Israel) TASR tunable all-optical signal regenerator doubled the distance that NRL’s traffic could be carried all-optically, from 800 to 1,600 km. Currently in alpha testing, the TASR is an all-optical 2R signal regenerator; it reamplifies and reshapes the signal while also providing dispersion compensation.

“We are also at the same time doing wavelength conversion to show them that we can swap wavelengths if they need that capability as well,” reports Kailight vice president of marketing and business development Neil Salisbury. With the NRL demo, “you have optical switching, wavelength conversion, and 2R regeneration as well as PMD and dispersion compensation all being done together in a single network application using live traffic.”

For the OFC demonstration, only the 800-km link between Boston and Washington, DC, featured all-optical transmission; to extend its network across the continental U.S. to Anaheim, the NRL ran the signal over standard networks. Qwest provided access to one of its circuits, which enabled the NRL to carry traffic from its local point of presence (PoP) in Washington, DC, to a PoP in Los Angeles. The signal was then carried from Los Angeles to Anaheim on fiber supplied by XO Communications.

The NRL is not in the business of commercializing products. Instead, it attempts to understand a technology, push it to its limits, and develop research collaborations with vendors that will then be responsible for commercialization. In this case, says Dardy, the NRL hopes to “get people interested in the fact that we can drive data over long distances at these very-high-performance rates so we can start to build large file systems and access large file systems very flexibly.”

The OFC demonstration was not just about bandwidth, he maintains. Rather, it demonstrated what he calls “time bandwidth product.” “It’s not only using the bandwidth that we had to demonstrate,” says Dardy. “It’s the ability to provide these quality-of-service, end-to-end connections with the lowest possible latency, which ultimately comes down to time bandwidth product, which we’ve renamed ‘professionalism.’

“If you’re a doctor and you want to get an X-ray of a patient who is sitting in your office, you’ll use a digital system if you can hit the carriage return and the X-ray comes up. Otherwise, you won’t use it. We’re trying to grow the next generation of thinking where these large amounts of data are available at your fingertips. We think [the technology] is ready, and we’re trying to demonstrate that.”
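The “time bandwidth product” Dardy describes is closely related to the classic bandwidth-delay product: at 10 Gbit/s over a roughly 5,000-km path, tens of megabytes are already in flight before the first acknowledgment can return, which is why latency, not just raw rate, governs usable performance. A rough sketch, assuming a typical single-mode-fiber refractive index of about 1.468 (a figure not given in the article):

```python
# Rough bandwidth-delay estimate for a 10-Gbit/s, ~5,000-km link.
# The fiber index, and hence the propagation speed, is an assumption.

C_VACUUM = 299_792_458   # speed of light in vacuum, m/s
FIBER_INDEX = 1.468      # typical index of single-mode fiber (assumed)

def in_flight_bytes(rate_bps: float, distance_m: float) -> float:
    """Bytes on the wire during one one-way propagation delay."""
    delay_s = distance_m / (C_VACUUM / FIBER_INDEX)
    return rate_bps * delay_s / 8   # bits -> bytes

mb = in_flight_bytes(10e9, 5_000_000) / 1e6
print(f"~{mb:.0f} MB in flight one way")   # roughly 30 MB
```

With tens of megabytes resident in the pipe at any instant, end-to-end quality of service and low latency determine whether remote file systems and visualization feel local — the point of the demonstration.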

Meghan Fuller is the senior news editor at Lightwave.
