Fiber Forward

Sept. 8, 2010

By Stephen Hardy

Forty years after the invention of low-loss fiber, fiber optics has become the foundation for high-bandwidth communications. What does the future hold?

It may be difficult to imagine that it's been about 45 years since Dr. Charles Kao's pioneering work to create glass optical fiber; meanwhile, 40 years have passed since Corning developed the low-loss fiber technology that made optical communications practical. Fiber has progressed from scientific experiment to the foundation of today's high-speed wireline networks.

Innovation continues today with the next generation of requirements in mind. Current research diverges down two paths: making fiber easier to use and improving its performance.

Long-haul research comes full circle

Initial research on fiber focused on making the technology practical and economically manufacturable. Once attenuation reached 0.2 dB/km, emphasis shifted toward tailoring fiber to enable the technologies, such as WDM and EDFAs, that would help optical communications meet the growing bandwidth needs of customers (mainly carriers). The result was the generation of dispersion-shifted fibers, including the still popular G.655 non-zero dispersion-shifted fibers (NZDSF).

"That was a phase where we were not spending much time on the fundamentals of the glass and we were more playing around with the waveguide design," says Dr. Claudio Mazzali, manager, new business and technology development at Corning Inc.

In the long-haul space, the research tide is starting to turn again, according to Dr. Mazzali. "If you look at what's happening now, with all the new generation of systems, the new modulation formats, the digital processing systems they have and the coherent technology they're starting to use for 100G, we're going back to the point where the innovation in optical fiber is going back to the fundamentals of the materials, of the glass," he explains.

That's because coherent detection is expected to handle many of the dispersion issues the previous generation of long-haul fiber addressed. The emphasis has therefore returned to lowering loss. Corning and others have developed fibers that reduce attenuation from 0.2 dB/km to around 0.17 dB/km, which translates into 3 dB of additional margin over a 100-km span.
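The margin arithmetic is straightforward: the per-kilometer improvement multiplied by the span length gives the extra loss budget. A minimal sketch, using only the figures cited above:

```python
# Extra link margin gained by moving to a lower-loss fiber over a given span.
def extra_margin_db(old_loss_db_per_km, new_loss_db_per_km, span_km):
    """Difference in total span loss between two fiber types, in dB."""
    return (old_loss_db_per_km - new_loss_db_per_km) * span_km

# Figures cited in the article: 0.2 dB/km vs. roughly 0.17 dB/km over 100 km.
margin = extra_margin_db(0.2, 0.17, 100)
print(round(margin, 2))  # about 3 dB of additional margin
```

The same function shows why the gain matters more on long routes: over a 1,000-km backbone path, the identical 0.03 dB/km improvement is worth roughly 30 dB.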

Data centers offer a promising opportunity for fiber.

And just as systems developers are discussing what comes after 100 Gbps, so too are fiber vendors. For the next rung up the bandwidth ladder, lowering loss may not be enough, and the current generation of fiber may need to give way to a new one.

"There's a consensus forming that when you go beyond 100G in the backbone, fiber with lower non-linearity will play a key role in enabling future speeds like 400 Gbps or a terabit," states Robert Lingle, director of fiber design and systems research at OFS. "At OFS, we believe that a larger effective area is going to be critical to this new kind of fiber."
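The link between effective area and nonlinearity comes from the fiber's nonlinear coefficient, gamma = 2*pi*n2 / (lambda * Aeff): enlarging the effective area Aeff directly scales the coefficient down. A rough illustration follows; the specific values (silica's nonlinear index n2, the wavelength, and the two effective areas) are typical textbook numbers chosen for the sketch, not figures from the article.

```python
import math

# Nonlinear coefficient of a fiber: gamma = 2*pi*n2 / (lambda * Aeff).
def gamma_per_w_km(n2_m2_per_w, wavelength_m, a_eff_m2):
    """Nonlinear coefficient in 1/(W*km)."""
    gamma_per_w_m = 2 * math.pi * n2_m2_per_w / (wavelength_m * a_eff_m2)
    return gamma_per_w_m * 1e3  # convert 1/(W*m) to 1/(W*km)

N2 = 2.6e-20      # m^2/W, a typical nonlinear index for silica (assumed)
WAVELENGTH = 1550e-9  # m, C-band operating wavelength

std = gamma_per_w_km(N2, WAVELENGTH, 80e-12)     # ~80 um^2, standard SMF
large = gamma_per_w_km(N2, WAVELENGTH, 150e-12)  # ~150 um^2, large-Aeff design

# Nearly doubling the effective area cuts the nonlinear coefficient
# by the same factor, allowing higher launch powers for a given penalty.
print(round(std, 2), round(large, 2), round(large / std, 3))
```

Because gamma scales as 1/Aeff, the ratio of the two coefficients is simply the inverse ratio of the effective areas, which is why larger-core designs are attractive for the higher launch powers that 400-Gbps and terabit formats will demand.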

Prototypes of such fiber have begun to circulate, with impressive results. For example, AT&T Labs, NEC Labs America, and OFS described, in a post-deadline paper this past March at OFC/NFOEC, the transmission of 64 Tbps (640x107 Gbps) of PDM-36QAM traffic over 320 km.

However, it will be years before the communications community sorts out the right technology for transmission at rates greater than 100 Gbps. Therefore, the commercialization of fiber for such applications is several years down the road, as well. And just because it's available doesn't mean it will be deployed quickly.

"They're going to move toward this kicking and screaming," says David DiGiovanni, CTO at OFS Labs, of carriers' interest in a new fiber generation. "Because this is going to require serious capital outlay. And they're going to want to defer that as long as they can."

In addition, these new fibers will have to mate with fielded infrastructure; the resulting mode field mismatches will require a solution. "That could be something as simple as modification of splicing programs," DiGiovanni offers. "It could be a little bit more profound than that."

A better handle on fiber

Ryan Chappell, of Draka Communications' North American operations, agrees that companies such as his have several years to figure out how to support requirements beyond 100G. In fact, he has yet to see much effect on fiber deployments from coherent technology.

"We're scratching our heads because we would expect that coherent detection would decrease the need for non-zero dispersion-shifted fibers. And yet, we don't see a change really," Chappell says, adding the caveat that coherent technology is just reaching the field and, therefore, may take a few years to have an effect.

Thus, Chappell foresees a nearer-term focus on improving fiber handling, particularly continued advances in bend insensitivity for both multimode and singlemode fiber. Such fibers save costs beyond speeding installation. "It's allowing for reduced cable diameters, which then allows for all kinds of interesting things: reduced cost of duct rental space, fitting more cables in the same duct," Chappell explains.

While questions swirl around bend-insensitive multimode fiber and its adherence to standards (see the related article in our Analysis section), Chappell anticipates near-term opportunities for the technology. "Maybe it's not as much a pull from operators in terms of the need for multimode fibers which have lower bend radii per se, but more so in the cable designs," he says. "This allows for better cable designs, reduced-cost cable designs, smaller cable designs, and smaller cable management maybe coming right behind it."

All sources agree that bend-insensitive singlemode is well established in applications such as fiber to the home. But the severity of the requirements in FTTH and enterprises pales in comparison to that of emerging applications in consumer electronics.

"Fiber to the home was about having an optical fiber that could behave like copper cable. For consumer electronics, it's even worse," says Corning's Mazzali. "People will be carrying optical fiber cables in their pockets."

They also will be installing cables themselves in the home, perhaps using optical technology similar to Intel's Light Peak.

"I think that the requirements are not totally clear yet," Mazzali offers. "I think that's a hot spot in the industry right now that people are trying to understand."

It appears that there actually are several hot spots for fiber research. The technological curve that began more than 40 years ago shows no signs of slowing its upward path.

Through a glass brightly

Dr. Donald Keck was a member of the team at Corning that developed the first low-loss optical fiber in 1970, an advancement that made fiber practical for communications applications. In an article on Lightwave Online, Dr. Keck tells the story of this achievement:

When the laser beam hit the fiber core, I was suddenly blinded by a bright returning laser beam. It took me a moment to realize the laser was being retro-reflected off the far end of the fiber and coming back through my system. With considerable anticipation I measured the fiber loss and, to my delight and surprise, it was 17 dB/km. I registered my delight in my now well-known lab-book entry … "Whoopee!"
