Three decades of innovation

Jan. 22, 2014
Here's a look back at some of the technological highlights Lightwave has covered over the past 30 years.

By Pauline Rigby

Nineteen eighty-four was a significant year for telecommunications.

The year started with a bang when, on January 1, 1984, the Bell System in the U.S. was broken up; in its place emerged seven regional operating companies and a new AT&T providing long-distance services. Later that same year, British Telecom (BT) was privatized, with more than 50% of its shares sold in what was then the world's largest share issue. The U.K. government also stripped the incumbent of its exclusive rights to run telephone services and laid the ground rules for a competitive telecom market. Monopolies were out; competition was in.

There were early indications that thin strands of glass would one day encircle the globe. Canadian service provider SaskTel had completed construction of what was then the world's longest commercial fiber-optic network, which covered 3,268 km and linked 52 communities, a considerable achievement at a time when most fiber-optic links were less than 10 km long. The optical system – developed by Bell-Northern Research (BNR), the Ottawa-based research and development arm of Nortel – worked at 850 nm and required repeaters every 5 to 9 km.

We couldn't have predicted the many innovations in fiber optics when we published our first issue in January 1984.

The basic building blocks of the Internet were also falling into place. The Domain Name System, which makes it possible to use a simpler, more memorable name in place of a host's numerical address, was first implemented. Additionally, Cisco Systems was founded in 1984 by two members of Stanford University's computer support staff to develop multiprotocol router products that would enable disparate networks to talk with each other and share information reliably.

Against this backdrop, Lightwave magazine was founded.

While fiber-optic communications had clearly arrived by 1984, there was still plenty of work to do. Let's take a look at some key technological developments that have shaped optical communications over the last 30 years.

Transatlantic fiber-optic cable

Coaxial submarine cable systems had reached their limits with the transatlantic telephone cables TAT-6 and TAT-7. Adding capacity meant adding more frequencies, which required a heavier cable and more closely spaced repeaters. Submarine cable developers were keen to reduce the number of repeaters, which were expensive and represented potential points of failure. That led them to look at singlemode fiber and optics at 1310 and 1550 nm, rather than earlier technology generations based on graded-index fibers and 850-nm lasers.

A decade in the planning, TAT-8 was completed in 1988 by a consortium of companies led by AT&T, France Télécom, and BT. The cable landed in Tuckerton, NJ; Widemouth Bay, England; and Penmarch, France. TAT-8 contained two singlemode fiber pairs with 1310-nm signals carrying 280 Mbps of data – nine times the capacity of TAT-7. Demand for international calling exploded, exhausting the cable's capacity of 7,680 telephone circuits soon after installation. TAT-9 followed in 1991 with double the capacity; its use of 1550-nm wavelengths made it possible to increase repeater spacing to 120 km. TAT-8 was eventually retired from service in 2002.
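
As a rough back-of-the-envelope check on those figures (a sketch only; it assumes the 280-Mbps rate applies to each working fiber pair and that each telephone circuit is a standard 64-kbps digital voice channel, with the balance taken up by framing and overhead):

```python
# Rough capacity check for TAT-8 (illustrative assumptions:
# 280 Mbps per working pair, 64-kbps digital voice circuits).
PAIRS = 2                 # working singlemode fiber pairs
LINE_RATE_MBPS = 280      # per pair, at 1310 nm
CIRCUITS = 7_680          # nominal telephone circuits
VOICE_KBPS = 64           # one digital voice channel

total_mbps = PAIRS * LINE_RATE_MBPS
voice_mbps = CIRCUITS * VOICE_KBPS / 1_000

print(f"Aggregate line rate:          {total_mbps} Mbps")
print(f"Voice payload:                {voice_mbps:.0f} Mbps")
print(f"Remainder (framing/overhead): {total_mbps - voice_mbps:.0f} Mbps")
```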

EDFAs made long-reach WDM networks possible. Packaging improved considerably between the first modules and this one from 2006. EDFA modules now come combined with Raman amplification capabilities to address 100-Gbps requirements.

Erbium-doped fiber amplifiers

Scientists originally saw optical amplifiers as a step toward building a laser. Based on their work on solid glass lasers, Elias Snitzer and colleagues at American Optical in the U.S. described the first working fiber laser and amplifier in 1964, but its application wasn't immediately obvious.

Snitzer's early device was based on neodymium, a rare-earth metal. Several decades later, David Payne and his team from the University of Southampton in the U.K. were looking to make better fiber lasers and decided to try doping them with different rare earths to see what happened. "It took us 26 publications on fiber lasers before we realized that if we took the mirrors off and looked at what the gain was...we'd have a huge gain of 30 dB," Payne told science historian Jeff Hecht. By sheer good fortune, adding erbium to silica optical fibers produced amplification in the low-loss optical window at 1550 nm.

Payne's team demonstrated their EDFA in 1986, but there was much work to do to make the device commercially viable. Southampton's prototype EDFA was pumped by an argon laser, a laboratory-based apparatus that required hundreds of volts to operate. Interest in the field nevertheless blossomed, encouraged by the results from Southampton as well as Emmanuel Desurvire and colleagues at AT&T Bell Labs. In 1989, Desurvire's group reported a crucial experiment showing that EDFAs could handle signals at multiple wavelengths without crosstalk between channels. That same year, Japan's NTT developed a practical device pumped by semiconductor diode lasers.

DWDM systems

Conventional wisdom has it that EDFAs unlocked the potential of wavelength-division multiplexing (WDM), so it may come as a surprise that commercial DWDM systems reached the market many years later than amplifiers.

The idea of sending digital information over multiple wavelengths simultaneously was not new. AT&T Bell Labs picked three widely spaced channels – semiconductor lasers at 825 and 875 nm and an LED at 1300 nm – for its Northeast Corridor backbone network, which covered 611 miles from Boston to Washington, D.C., and was turned up in 1984.

However, singlemode fiber transmission at 1550 nm quickly eclipsed the capacity of shorter-wavelength systems. Bell Labs turned its attention to the longer wavelengths and in 1995 introduced the first commercial WDM system, with four channels.

But as is often the case, parallel developments were taking place. Optical systems startup Ciena was working with U.S. carrier Sprint to develop a high-capacity fiber-optic transmission system. On March 28, 1996, Ciena announced the MultiWave 1600, an optical system capable of transmitting up to 16 discrete optical channels over one fiber pair. A few months later the company announced that Sprint was using the dense WDM system to increase its network capacity by a factor of 16.

According to Joe Berthold, vice president of network architecture at Ciena, AT&T's system was essentially four SONET systems stacked one on top of the other, both on the fiber and physically in the equipment room. The MultiWave 1600 platform also was the first transceiver-based DWDM networking system, he says. Separation of DWDM transmission from SONET-based multiplexing made it possible to simplify networks by allowing data switches and routers to be directly connected across networks without expensive intervening SONET equipment.

Intelligent networking

A lack of standards for optical networks inevitably led to a proliferation of proprietary interfaces. SONET (and SDH) was created to define a synchronous frame structure for multiplexed digital traffic that would enable equipment from different manufacturers to interoperate, with standards completed in 1988.

As network traffic exploded, keeping track of the data became much harder, and the emergence of DWDM only compounded the problem. Optical switching systems grew dramatically in size, with a corresponding increase in the time it took to reconfigure a route in the event of a failure. A SONET OC-48 (2.5-Gbps) container carried 48 DS3 circuits, each of which had to be rerouted to its destination, and an optical switch supported as many as 128 DWDM wavelengths, each carrying an OC-48. Doing that the old-fashioned way, where a centralized controller took several seconds to reconfigure each connection, simply wasn't feasible anymore.
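
To get a feel for the scale, here is a rough sketch of that arithmetic; the wavelength and circuit counts come from the figures above, while the 3-second per-connection time is an illustrative value within the "several seconds" range:

```python
# Scale of a failure-recovery event on a pre-control-plane optical switch.
WAVELENGTHS = 128          # DWDM wavelengths supported by the switch
DS3_PER_OC48 = 48          # DS3 circuits carried in one OC-48 container
SECONDS_PER_RECONFIG = 3   # illustrative; "several seconds" per connection

circuits = WAVELENGTHS * DS3_PER_OC48
total_seconds = circuits * SECONDS_PER_RECONFIG

print(f"Circuits to reroute: {circuits}")
print(f"Sequential reconfiguration time: {total_seconds / 3600:.1f} hours")
```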

Carriers needed to manage and switch all this new optically enabled bandwidth, but to do so they needed a new network architecture. In 1999, startup Lightera Networks launched an intelligent optical switch that could automatically set up and restore wavelengths over mesh networks. It relied on a distributed optical control plane that kept an up-to-date view of the state of the network, and recovery paths were pre-computed, speeding up switching times significantly. That same year, Ciena acquired Lightera and its CoreDirector switch for $500 million.

The team that developed 40G/100G coherent transmission capabilities for Nortel's OME 6500 playfully plug in.

Coherent transmission

Coherent transmission had been explored in the 1980s but was abandoned because EDFAs provided a simpler, less expensive way to increase system reach. As data rates increased to 40G, however, optical-fiber impairments became problematic. Chromatic dispersion and polarization mode dispersion (PMD) worsen rapidly with the symbol rate – the dispersion penalty grows with its square – and curtail the reach of the optical signal. Vendors offered a number of modulation options – different from typical nonreturn-to-zero on/off keying but still based on direct detection – to mitigate these impairments; however, the commercial results were mixed at best.

Nortel, which in 1999 had managed to achieve 80-Gbps serial optical transmission under laboratory conditions, assembled experts from throughout the company to tackle the problem. The team included engineers from the company's high-capacity microwave development group as well as specialists in optical devices and high-speed electronic-circuit design.

The team looked at complex modulation schemes from areas outside of optical communication that encoded more than one bit of data per symbol. The resultant approach relied on electronic algorithms to compensate for optical impairments, even on installed fibers that had been declared unsuitable for 10G transmission.
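
As a rough sketch of why encoding more bits per symbol helps: dual-polarization QPSK, a common coherent format (used here purely for illustration, not as a description of Nortel's exact design), packs four bits into each symbol interval, cutting the symbol rate of a 40-Gbps signal to a quarter, which under the square-law scaling mentioned above buys roughly a 16x improvement in dispersion tolerance.

```python
# Symbol-rate relief from multi-bit modulation (illustrative numbers only).
BIT_RATE_GBPS = 40
BITS_PER_SYMBOL_OOK = 1      # NRZ on/off keying: 1 bit per symbol
BITS_PER_SYMBOL_DP_QPSK = 4  # QPSK (2 bits) on each of 2 polarizations

baud_ook = BIT_RATE_GBPS / BITS_PER_SYMBOL_OOK          # 40 Gbaud
baud_dp_qpsk = BIT_RATE_GBPS / BITS_PER_SYMBOL_DP_QPSK  # 10 Gbaud

# Dispersion penalty roughly scales with the square of the symbol rate,
# so quartering the symbol rate yields roughly a 16x tolerance margin.
relief = (baud_ook / baud_dp_qpsk) ** 2

print(f"OOK symbol rate:     {baud_ook:.0f} Gbaud")
print(f"DP-QPSK symbol rate: {baud_dp_qpsk:.0f} Gbaud")
print(f"Relative dispersion tolerance: {relief:.0f}x")
```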

On March 12, 2008, Nortel officially unveiled the first coherent 40G optical transport system. The following year, however, Nortel slid into bankruptcy and its optical division – including the coherent technology – was purchased by Ciena (which clearly was developing a knack for this sort of thing). Today, coherent optical technology is the foundation of the industry's efforts to boost transport speeds past 100G and toward terabits.

VCSELs

The vertical-cavity surface-emitting laser (VCSEL) emits light out of its top surface instead of a cleaved end facet – a simple distinction that lends it quite different properties. Optics developers like the circular, low-divergence beam and the fact that the devices can be fabricated and tested in arrays, which slashes manufacturing costs.

VCSELs have succeeded at 850 nm, leading to several attempts to develop devices that operate at longer wavelengths. This photo, taken in 2000, shows Sandia National Laboratories researcher John Klem next to the molecular-beam epitaxy system used to grow the crystal structure of the 1300-nm VCSEL the labs were developing in partnership with Cielo Communications. Work continues on creating VCSELs for 1500-nm applications as well.

Honeywell Technology Center carried out the first study demonstrating that VCSELs had the reliability required for applications in data-center networks; the study was published in 1996, the same year the company launched its commercial devices. Other manufacturers, such as Hewlett-Packard (now Avago Technologies) and Mitel (now Zarlink), soon joined the effort, and at one point during the dot-com boom there were more than 30 suppliers.

VCSELs have enabled the optical interconnect market that we know today, says Craig Thompson, director of strategic marketing at Finisar, which acquired Honeywell's VCSEL group in 2004. VCSELs fulfilled the market requirement for highly reliable, low-cost, fast interconnects for LANs and data centers.

Fiber to the home

Based on his observations between 1984 and 1998, Jakob Nielsen coined the law of Internet bandwidth, which states that an end user's Internet connection speed increases by 50% annually. Twisted-pair telephone cable was never designed to carry high-speed data signals, so it was only a matter of time before operators pondered replacing copper in the last mile with optical fiber.
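
A quick illustration of what 50% annual growth compounds to (the 1-Mbps starting speed is purely illustrative):

```python
# Nielsen's law: end-user connection speed grows ~50% per year.
start_mbps = 1.0   # illustrative starting speed
growth = 1.5       # 50% annual growth

for years in (5, 10, 20, 30):
    speed = start_mbps * growth ** years
    print(f"After {years:2d} years: {speed:,.0f} Mbps ({speed / start_mbps:,.0f}x)")
```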

In the 1980s, British Telecom had already begun experimenting with installing optical fiber directly to homes and using it to deliver digital video. Other countries around the world, notably Japan and the U.S., were toying with similar ideas. However, installation costs were high and demand for services was not assured. The technology worked, but the market wasn't ready.

Over the next decade or so, bandwidth continued to grow exponentially while engineers tackled the cost side of the equation. Passive optical networks were developed to spread the cost of expensive optical transmitters by splitting the signal among 8, 16, or even 32 homes. New installation techniques such as blown fiber and microtrenching also helped drive down costs, while simple inventions like pluggable connectors made the technician's job much easier.
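
The cost logic of a passive split can be sketched in a few lines; the dollar figures below are hypothetical placeholders, chosen only to show how the shared central-office optics and feeder fiber amortize across subscribers:

```python
# How a passive split amortizes shared PON costs (hypothetical figures).
OLT_PORT_AND_FEEDER = 8_000.0  # hypothetical shared cost: OLT optics + feeder fiber
DROP_AND_ONT = 300.0           # hypothetical per-home cost: drop cable + home unit

for split in (8, 16, 32):
    per_home = OLT_PORT_AND_FEEDER / split + DROP_AND_ONT
    print(f"1:{split:<2d} split -> cost per home: ${per_home:,.0f}")
```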

Starting in 2001, Japan's NTT blazed the trail when it committed to bringing optical fiber within reach of virtually every Japanese household by 2010. In the U.S., Verizon's FiOS fiber program got underway in 2004. Today, FTTH is available to some 200 million households – roughly one in every 10 homes on the planet. A few countries – Japan, the United Arab Emirates, and Lithuania – are approaching near-universal access to FTTH, but elsewhere the debate about the business case still rages.

PAULINE RIGBY is a freelance writer and blogger, who specializes in writing about optical components and systems and fiber to the home. Her background includes a tenure as editor-in-chief of FibreSystems Europe magazine as well as writing for Light Reading.
