Fujitsu Labs develops 56G receiver circuit

June 17, 2014
Fujitsu Laboratories Ltd. says it has developed a receiver circuit capable of receiving electrical signals at 56 Gbps. This is double the data communication speed between CPUs in the current state-of-the-art equipment, and is an important step in the development of the next generation of high-performance servers and supercomputers, the company says.

In recent years, raising data-processing speeds in servers has meant not only increasing CPU performance but also boosting the speed of data communications between chips such as CPUs. One obstacle, however, has been the performance of the circuits that correct degraded waveforms in incoming signals.

Fujitsu Laboratories has used a new "look-ahead" architecture in the circuit that compensates for quality degradation in incoming signals, parallelizing the processing and increasing the operating frequency for the circuit in order to double its speed.

Details of this technology are being presented at the 2014 Symposia on VLSI Technology and Circuits, opening June 9 in Hawaii (VLSI Circuits Presentation 11-2).

Quick decisions matter

For the next generation of high-performance servers, the goal is to double the data communication speeds between CPUs and other chips to 56 Gbps. Meanwhile, the Optical Internetworking Forum (OIF) has been working on the standardization of 56 Gbps for the optical modules used for optical transmission between chassis (see “OIF launches 56-Gbps electrical interface projects”).

One way to speed up the receiver circuit is to improve the processing performance of the decision feedback equalizer (DFE) circuit, which compensates for the degraded input-signal waveform (see figure). The principle behind DFE is to correct the input signal based on the value of the previous bit, emphasizing changes in the input signal; the actual circuit design works by choosing between two predefined corrected candidates. If the previous bit was a 0, the correction process applies a positive (additive) correction to the input signal to emphasize a change from 0 to 1. If the previous bit was a 1, it applies a negative (subtractive) correction to emphasize a change from 1 to 0. If another 0 is received, the positive compensation increases the signal level, but not to such a level as would create a problem for the 1/0 decision circuit.
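
To make that feedback principle concrete, here is a minimal software sketch of a 1-tap DFE of the kind described above. The tap weight, the 0.0 decision threshold, and the example samples are illustrative assumptions, not Fujitsu's actual parameters.

```python
# Minimal sketch of a 1-tap decision feedback equalizer (DFE).
# The tap weight h1, the 0.0 threshold, and the sample values are
# illustrative assumptions, not Fujitsu's design.

def dfe_1tap(samples, h1=0.3):
    """Decide each bit after correcting the sample with the previous decision."""
    bits = []
    prev = 0  # assume the stream starts from a 0
    for x in samples:
        # Previous bit 0: add the correction to emphasize a 0->1 transition.
        # Previous bit 1: subtract it to emphasize a 1->0 transition.
        corrected = x + h1 if prev == 0 else x - h1
        prev = 1 if corrected > 0.0 else 0  # 1/0 decision against the threshold
        bits.append(prev)
    return bits

# Example: one sample per bit from a degraded waveform (signed levels).
print(dfe_1tap([0.6, -0.1, -0.5, 0.2, 0.7, -0.3]))
```

Note that each decision depends on the one before it, which is exactly the feedback loop that limits how fast a conventional DFE can run.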

(Figure source: Fujitsu Laboratories)

Fujitsu Laboratories took a new approach, a "look-ahead" method that pre-calculates the two candidates to be selected between based on the result for the previous bit, and simultaneously decides the values of the previous bit and the current bit once the bit two positions earlier has been decided. This shortens calculation times, resulting in a receiver circuit that can operate at 56 Gbps.
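
The essence of the look-ahead idea can be sketched in the same style: both corrected candidates are evaluated before the previous bit is known, leaving only a simple selection on the critical path. This is a simplified one-bit illustration of the general speculative technique, not Fujitsu's exact two-bit circuit; the names are hypothetical.

```python
# Sketch of the look-ahead (speculative) DFE idea: both candidates are
# pre-calculated, so only a 2:1 selection remains on the critical path.
# Simplified one-bit illustration, not Fujitsu's exact two-bit circuit.

def dfe_lookahead(samples, h1=0.3):
    bits = []
    prev = 0
    for x in samples:
        cand_prev0 = 1 if (x + h1) > 0.0 else 0  # decision assuming previous bit = 0
        cand_prev1 = 1 if (x - h1) > 0.0 else 0  # decision assuming previous bit = 1
        prev = cand_prev0 if prev == 0 else cand_prev1  # only the selection remains
        bits.append(prev)
    return bits
```

The output is identical to the straightforward 1-tap loop above; only the dependency structure changes, which is what allows the hardware to run faster.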

Multiple look-ahead circuits that apply DFE one bit at a time can also operate independently of each other, making it possible to parallelize their processing. Parallelization is achieved by inserting a hold circuit between the selection circuit and the look-ahead circuit, with the input and output of each hold circuit being synchronized.

Because the calculation time of the look-ahead circuit is roughly the same as the selection time of the selector, the overall calculation time depends on the number of selectors; in a four-bit system, two selectors are required. This means the computation can be completed with electronics running at just one quarter of the target data rate of 56 Gbps, so 14-Gbps electronics can be used to receive data at 56 Gbps.
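
The benefit of parallelization can be illustrated with one more sketch: within each block of four bits, the candidate decisions are independent of one another and could be computed concurrently by electronics clocked at a quarter of the line rate, while only the short selection chain carries the bit-to-bit dependency. This is an illustrative model of the idea, not the actual selector and hold-circuit arrangement.

```python
# Illustrative model of four-way parallelization: per-bit candidates inside a
# block are mutually independent (computable concurrently, i.e. by electronics
# running at 1/4 the line rate); only the selections ripple through the block,
# with prev carried across blocks the way a hold circuit would.

def dfe_parallel4(samples, h1=0.3):
    bits = []
    prev = 0
    for i in range(0, len(samples), 4):
        block = samples[i:i + 4]
        # Candidates for every bit in the block, independent of one another.
        cands = [(1 if (x + h1) > 0.0 else 0,   # if the previous bit were 0
                  1 if (x - h1) > 0.0 else 0)   # if the previous bit were 1
                 for x in block]
        # Only these selections depend on earlier decisions.
        for c0, c1 in cands:
            prev = c0 if prev == 0 else c1
            bits.append(prev)
    return bits
```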

This technology makes it possible to increase the bandwidth of communications between CPUs in future servers and supercomputers without increasing pin counts, even as CPU performance doubles, and will contribute to increased performance in large-scale systems where numerous CPUs are interconnected, Fujitsu Labs claims.

In addition, the technology complies with standards for optical-module communications. Compared with 400-Gbps Ethernet carried over OIF CEI-28G lanes, the number of circuits running in parallel (the lane count) can be halved, allowing smaller optical modules that run on less power, and higher system performance.

Fujitsu Laboratories plans to apply this technology to the interfaces of CPUs and optical modules, with the goal of a practical implementation in fiscal 2016. The company says it is also considering applications to next-generation servers, supercomputers, and other products.

For more information on communications ICs and suppliers, visit the Lightwave Buyer’s Guide.
