Standards and product suppliers interact to ensure conformance
Equipment and operations support suppliers balance their products to meet changing Sonet standards while satisfying existing customer needs
Sunil G. Joshi
Bell Communications Research
The telecommunications industry considers it critical that equipment and operations systems suppliers adhere to synchronous optical network standards to obtain network interoperability and flexible bandwidth management, as well as automated and timely operations, administration, maintenance and provisioning activities.
Completion of the Phase 1 Sonet standard triggered the introduction of related products. At that time, full conformance to the standard was not possible and was not the key factor for deployment decisions. Telephone companies, as well as equipment suppliers, gained experience through field trials, first office applications and deployment.
As industry requirements were refined, suppliers determined the relative importance of new requirements and features, and the costs associated with design changes based on customer feedback. Suppliers then upgraded their products to bring them in conformance with the introduction of new product releases.
In addition to Sonet standards for terminal multiplexers and add/drop multiplexers, other standards were developed for ring-network topologies that provided increased reliability and survivability. Moreover, Sonet has apparently been accepted as the transport standard of choice, with customer premises equipment, digital crossconnect systems, central office switches and new asynchronous transfer mode switches offering Sonet interfaces.
While Sonet technology was moving from conception to delivery, network service providers were deploying asynchronous fiber-optic transmission systems. To satisfy user demands for increased bandwidth, network providers were ready to deploy the highest-capacity fiber-optic transmission systems available.
In the absence of industry standards, though, each supplier developed its own optical rate, format and multiplexing scheme. This approach led to a proliferation of non-standard, proprietary interfaces and non-interoperable fiber-optic transmission systems. For example, to drop a signal at intermediate sites, two back-to-back terminal multiplexers were needed. These products provided redundancy for reliability (for example, 1+1, 1:1 and 1:n protection switching), as well as operations, administration, maintenance and provisioning capabilities.
However, this setup imposed constraints whenever optical signals were handed from one carrier to another (for example, local exchange carrier to interexchange carrier), especially when the two carriers used equipment from different suppliers. Consequently, both carriers had to train their craftspeople on equipment that was not a standard within their companies.
At this time, suppliers, as well as network service providers, knew Sonet was coming. The suppliers wanted to know when they should transition from manufacturing and selling asynchronous products to Sonet-based synchronous products. The users wanted to know when they should stop deploying asynchronous fiber-optic transmission systems and start deploying Sonet products.
As always, business needs determined the answers. When network providers needed increased bandwidth for their customers, they used the best available systems, and suppliers built those systems as long as their customers purchased them. However, suppliers used intelligent strategies to balance the need to move forward to Sonet and still meet their customers' existing needs.
In the early stage of Sonet standards' development, the main topic of discussion was the primary optical transport rate. From two original proposals of 50.688 megabits per second (Bellcore) and 146.432 Mbits/sec (AT&T), a new rate of 49.920 Mbits/sec was established.
In addition, the notion of a virtual tributary was accepted for transporting digital signal, level 1 services. By this time, substantial details had been resolved, and a draft document was ready for voting. This T1X1 draft standard was based on transporting standard digital signals (DS-n) in U.S. telecommunications networks.
During this time, the Consultative Committee on International Telegraph and Telephone (now the International Telecommunication Union) expressed interest in Sonet. Interest in international Sonet also increased in the United States. In 1987, with Bellcore's guidance, Sonet was proposed to CCITT. However, to map European signal hierarchies, the frame structure had to be changed from 13 rows by 60 columns to 9 rows by 270 columns. Also, bit-interleave multiplexing was changed to byte-interleave multiplexing. After these agreements between T1X1 and CCITT, the Phase I Sonet standard was completed in mid-1988.
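The byte-interleave multiplexing agreed on above differs from bit interleaving in that each tributary contributes a whole byte in turn. A minimal sketch, assuming equal-length, frame-aligned tributary byte streams (the function name is ours):

```python
def byte_interleave(streams):
    """Byte-interleave multiplexing: take one byte from each tributary
    in turn (bit interleaving would alternate single bits instead).
    Assumes the tributary byte streams are frame-aligned and equal length."""
    if len({len(s) for s in streams}) != 1:
        raise ValueError("tributaries must be frame-aligned and equal length")
    out = bytearray()
    for group in zip(*streams):  # one byte from each tributary per round
        out.extend(group)
    return bytes(out)

# Three STS-1 byte streams interleave into one STS-3 stream:
sts1s = [b"AAA", b"BBB", b"CCC"]
print(byte_interleave(sts1s))  # b'ABCABCABC'
```

This is only the interleaving step; real framing, overhead insertion and scrambling are omitted.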
A quiet period followed, during which suppliers were developing products based on the Phase I Standard. In 1989, many companies announced their intention to build Sonet equipment. Many regional Bell operating companies started field trials of first-generation Sonet equipment. Some early products did not provide add/drop functionality, but were designed to be used in linear point-to-point applications.
By 1990, many worldwide suppliers had developed Sonet products for line rates ranging from optical carrier, level 1 (51.84 Mbits/sec) to OC-48 (2488.32 Mbits/sec), and also for various types of network elements.
Although the standard is defined for integer multiples of the basic OC-1 line rate, only some of these rates have been used in actual products [for instance, OC-3 (155.52 Mbits/sec), OC-12 (622.08 Mbits/sec) and OC-48]. Because of the lack of OC-9 (466.56-Mbit/sec), OC-18 (933.12-Mbit/sec) and OC-36 (1866.24-Mbit/sec) products, these line rates were deleted from the Sonet standards and from Bellcore TA-253, Issue 8 (replaced by GR-253, Issue 1).
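The integer-multiple relationship between the OC-n line rates and the basic OC-1 rate is simple arithmetic, as this sketch shows:

```python
OC1_MBPS = 51.84  # basic OC-1 line rate in Mbit/s

def oc_rate(n):
    """Line rate of OC-n: an exact integer multiple of the OC-1 rate."""
    return n * OC1_MBPS

# The rates named in the text, both deployed and deleted levels:
for n in (1, 3, 9, 12, 18, 36, 48, 192):
    print(f"OC-{n}: {oc_rate(n):.2f} Mbit/s")
```

Running this reproduces the figures quoted above, for example 155.52 Mbit/s for OC-3 and 2488.32 Mbit/s for OC-48.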
Because the Phase I standard did not specify the use of the section and line data communications channels, many initial products used these overhead bytes for proprietary features. Some suppliers used these bytes to make their products more robust, and others used them to provide proprietary operations, administration, maintenance and provisioning features. One product used the D1 through D3 bytes, as well as the K1 and K2 bytes, for linear automatic protection switching.
To accommodate evolving standards for Sonet overhead, integrated circuits typically included:
A register map, and memory and proprietary serial control interface for reading/writing the register map.
Dedicated overhead interface channels synchronous with the system transmission clock.
The proprietary serial control interface also configures devices and monitors their performance. The dedicated overhead interface channels may also include supplier-specific access to unassigned Sonet overhead. Circuit packs typically included an onboard controller driving the device serial interface, deriving information both from local memory and from communications links with higher-order processors. With this architecture, Sonet overhead may originate in static memory (for example, the C1 byte), in slowly changing memory (for example, the F1 byte) or in higher speed dedicated channels (for example, the data communications channel).
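The register-map architecture described above might be sketched as follows; the class, addresses and register assignments are purely illustrative and not taken from any actual device:

```python
class OverheadRegisterMap:
    """Minimal sketch of an overhead register map read and written over a
    proprietary serial control interface. Addresses are hypothetical."""
    # Illustrative address assignments for a few overhead bytes:
    C1_ADDR, F1_ADDR, K1_ADDR, K2_ADDR = 0x00, 0x01, 0x02, 0x03

    def __init__(self):
        self.regs = bytearray(256)  # device register/memory space

    def write(self, addr, value):
        self.regs[addr] = value & 0xFF  # serial-interface register write

    def read(self, addr):
        return self.regs[addr]         # serial-interface register read

# The onboard controller writes static overhead into the map once;
# the device then sources it toward the transmission circuits.
rmap = OverheadRegisterMap()
rmap.write(OverheadRegisterMap.C1_ADDR, 0x01)  # static: STS-1 identifier
print(hex(rmap.read(OverheadRegisterMap.C1_ADDR)))
```

A design along these lines lets software redefine overhead byte values, though, as noted below, software alone cannot always absorb a standards change.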
Configuration data in the memory map controls device logic, steering data to the buffers that source the transmission circuits. It is not always possible to implement a design in which software upgrades alone can make the changes needed to conform to evolving standards. Suppliers knew they could not wait several years for official standards approval before developing a fully conforming Sonet product. Changes were expected in growth overhead bytes, new overhead definitions and enhancements to existing standards.
Although backward compatibility for products is desirable, it is not always possible. First-generation equipment implemented rates and formats according to the standard, but did not necessarily implement the timing criteria used to determine entry into various failure states, such as loss of signal, loss of frame and loss of pointer. However, this situation does not affect performance when equipment from the same supplier is used at both ends of the network.
Some equipment did not detect loss-of-pointer states and, therefore, did not generate an alarm, because the loss-of-pointer detection function was not part of the original standard. In addition, objectives to set all unused bits to zero and to ignore the values contained in unused bytes were not met by some initial products. One reason was that this equipment was designed to early Sonet standards, which suggested that all unused, unassigned and reserved bits be set to logic 1.
In such cases, suppliers had to replace circuit cards to bring the equipment into conformance with new requirements. In other cases, suppliers used different algorithms to detect alarms, reducing the probability of incorrect detection at the cost of longer detection times.
After the introduction of terminal multiplexers, linear add/drop multiplexers were offered. These add/drop multiplexers were more complex than terminal multiplexers because of their add/drop and pass-thru features, which were based on a time-slot interchange fabric. The add/drop multiplexers provide a key benefit of synchronous transport: They allow easy access to individual channels without demultiplexing. However, these applications required more attention to network synchronization and related timing issues.
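The time-slot interchange fabric underlying add/drop and pass-through behavior can be sketched minimally; the cross-connect map and slot labels here are illustrative:

```python
def time_slot_interchange(frame, cross_connect):
    """Reorder time slots in one frame according to a cross-connect map.
    cross_connect[i] names the input slot whose payload is written to
    output slot i; slots not in the map pass through unchanged."""
    out = list(frame)
    for out_slot, in_slot in cross_connect.items():
        out[out_slot] = frame[in_slot]
    return out

# Drop input slot 0 to output slot 2 and add slot 2's payload at slot 0;
# slot 1 passes through untouched -- no full demultiplex is needed.
frame = ["DS1-A", "DS1-B", "DS1-C"]
print(time_slot_interchange(frame, {0: 2, 2: 0}))  # ['DS1-C', 'DS1-B', 'DS1-A']
```

The point of the sketch is that individual channels are reached by rewriting slot assignments, which is why synchronous transport avoids demultiplexing the whole signal.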
As customer expectations increased for improved service reliability, ring-network architectures were established by folding the linear add/drop chain back on itself. Ring architectures allowed the end-to-end survivability desired by many customers.
Then, new requirements were introduced for unidirectional path-switched rings and bidirectional line-switched rings. Products that provided these new functions were introduced. At the same time, suppliers made changes to their initial products to bring them in conformance to the new requirements.
Standards bodies are working to develop requirements for OC-192 (9953.28-Mbit/sec) systems. However, suppliers are expected to introduce commercial OC-192 systems before the completion of standards; they will be able to offer customers the bandwidth they need without having to worry about conformance to all the requirements. Suppliers will upgrade the system to the next level of conformance as standards and technology mature.
The performance of the existing synchronization network was studied extensively and judged insufficient to guarantee jitter performance for the DS-3 signals that traversed "Sonet islands." A Sonet island is a Sonet network that has DS-3 signal inputs and outputs. When such networks are connected with asynchronous signals (that is, a DS-3 rather than a synchronous transport signal level-1 carrying a DS-3), jitter can accumulate as a result of pointer adjustments as one island is mapped through to the output of the next island via the C-bit asynchronous bit-stuffing mechanism.
Two procedures were instituted to address this issue. First, a "filter clock" was defined by Bellcore based on studies performed by T1X1.3; it is known as a stratum 3E clock. Most network providers are upgrading their building-integrated timing-system clocks to stratum 3E as they deploy Sonet. Second, tighter industry requirements for Sonet desynchronizers were written (in T1.105.03) and limit the amount of jitter generation during STS pointer adjustments.
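The desynchronizer's job of playing out pointer-adjustment phase steps gradually can be illustrated with a simple first-order filter; the smoothing coefficient is illustrative, not a value from T1.105.03:

```python
def desynchronize(phase_steps, alpha=0.05):
    """First-order low-pass ('leaky') filter applied to input phase.
    A pointer adjustment appears as an abrupt phase step at the input
    (one byte, i.e. 8 unit intervals, for an STS pointer adjustment);
    the desynchronizer plays it out gradually to limit jitter generation.
    alpha is an illustrative smoothing coefficient, not a spec value."""
    smoothed, phase_in, y = [], 0.0, 0.0
    for step in phase_steps:
        phase_in += step             # accumulated input phase
        y += alpha * (phase_in - y)  # gradual play-out of the phase change
        smoothed.append(y)
    return smoothed

# One positive pointer adjustment (an 8-UI phase step), then no activity:
out = desynchronize([8.0] + [0.0] * 99)
print(round(out[0], 2), round(out[-1], 2))  # small first step, approaching 8 UI
```

The filtered output rises smoothly toward the full 8-UI offset instead of jumping, which is the jitter-limiting behavior the tighter desynchronizer requirements mandate.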
The industry was also concerned about how virtual tributary-1.5 pointer adjustments would affect customer premises equipment that was timed by a DS-1 carried on Sonet. Requirements (supplement to T1.105.03) were added to specifications on DS-1 desynchronizers to control how quickly they played out the pointer adjustment so the customer premises equipment could tolerate it.
Recent discussions have dealt with the requirement that all Sonet network elements support stratum-3 clocks internally. This stipulation has not appeared in the requirements, but some equipment providers have made the decision to provide stratum-3 clocks in their products. Some others only furnish stratum-3 clocks in higher rate systems.
Synchronization messages were recently defined in the Z1 growth byte (now called the S1 sync message byte) and in the bit-oriented messages of the extended superframe data link for DS-1 signals. These messages allow synchronization reconfigurations of Sonet rings. Before the standard was completed, some suppliers provided similar functionality in a proprietary manner. The new products are expected to provide synchronization messaging as defined in the latest standard.
Major Sonet benefits to service providers are interoperability among suppliers' equipment and comprehensive network management capabilities, including automated operations. Three bytes are reserved in the section overhead for a section data communications channel, and 9 bytes (which are currently unspecified) are reserved for a line data communications channel. Phase 1 standards did not include protocols or a management information model for data communications channel use.
Because regional Bell operating companies typically manage their networks using Transaction Language One via an X.25 protocol stack, initial products provided a message-based TL-1 communications path to operations systems. Some products also provided parallel and serial (telemetry byte oriented serial) telemetry interfaces. Typically, the TL-1 message set supported by most suppliers was a subset of the surveillance messages specified in Bellcore Technical References.
Sunil G. Joshi is director of network synchronization/Sonet analysis at Bell Communications Research in Morristown, NJ.