Cutting in line with low latency

Jan. 19, 2011
Jason Smith, BTI Systems -- The impact of low latency on financial houses is now well known. But low latency can improve the bottom line for other businesses as well.

Low-latency networking has become a major area of focus recently as service providers seek optimized strategies to meet metro and regional network extension requirements and to support latency-intolerant applications in the financial market, among others.

Algorithmic trading in financial markets has been the major driver for low-latency optical networking. However, new applications that could benefit from low-latency networks are starting to emerge. (iStock)

Electronic trading leads the demand for low-latency connectivity because time is the ultimate factor in success. Electronic communications network (ECN) connectivity between exchanges, hosting centers, and high-frequency trading facilities delivers raw exchange feeds and transaction information, enabling traders to seize opportunities that exist only for fractions of a second. The classic saying “time is money” holds particular truth in today’s economy, and low latency has become a key differentiator in network infrastructure and a driver of a better bottom line.

When it comes to paying a premium for a low-latency service or investing in their own low-latency network, companies must ask themselves just how much is really at stake. A study by Forex mt4 Brokers points to an experiment by Commercial Network Services that put this question to the test.

On April 26, 2009, the company restarted demo versions of its expert advisor (EA) automated trading systems with $25,000 of funds in each account to illustrate the importance of low latency to the online trader. In the most dramatic outcome, its New York City-based EA platform earned nearly three times the profit of an identical platform in its West Coast data center, even though the same New York-based broker controlled both. After nine days, the EA running in the US West Coast data center had earned an $870 profit on a $25,000 account, while the EA in the New York City data center earned a $2,475 profit over the same period, roughly 2.8 times as much.

The profit variance came down to proximity: the New York data center sat close to the broker, while every exchange with the West Coast facility incurred transcontinental latency. The same logic applies to any pair of sites. The shortest path between two points, say between Chicago and New York, provides the same kind of competitive differentiation. If there are two network connection options between facilities and one is 10 ms faster, thanks to fewer intermediary devices and an optimized fiber route distance, the information traversing the shorter path arrives sooner and is more useful to latency-intolerant applications like market feeds.
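To put the distance argument in concrete terms, here is a minimal sketch in Python that converts fiber route length into propagation delay. It assumes light travels through standard single-mode fiber at roughly two-thirds of its vacuum speed (about 5 microseconds per kilometer); the route lengths are illustrative placeholders, not measurements of any particular network.

# Minimal sketch: propagation delay over an optical fiber route.
# Assumption: refractive index of ~1.47 for standard single-mode fiber,
# i.e. roughly 4.9 microseconds of one-way delay per kilometer.
# The route lengths below are illustrative, not real network measurements.

SPEED_OF_LIGHT_KM_PER_S = 299_792.458
FIBER_REFRACTIVE_INDEX = 1.47

def one_way_delay_ms(route_km: float) -> float:
    """One-way propagation delay, in milliseconds, over route_km of fiber."""
    speed_in_fiber_km_per_s = SPEED_OF_LIGHT_KM_PER_S / FIBER_REFRACTIVE_INDEX
    return route_km / speed_in_fiber_km_per_s * 1000.0

if __name__ == "__main__":
    routes = [
        ("Chicago-New York, optimized route", 1330),
        ("Chicago-New York, longer legacy route", 1700),
        ("New York-West Coast, transcontinental", 4500),
    ]
    for label, km in routes:
        one_way = one_way_delay_ms(km)
        print(f"{label:40s} {km:5d} km  one-way {one_way:5.1f} ms  round-trip {2 * one_way:5.1f} ms")

Propagation accounts for only part of the gap described above; switching, regeneration, and protocol processing at intermediate sites add to it, which is why removing intermediary equipment matters as much as shortening the glass.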

Beyond financial applications

The value of low latency goes beyond the financial vertical into other applications and areas within telecommunications. Advancements in optimized WAN transmission modules and platforms are already starting to benefit these applications across a broad range of network operations, including:

  • Cloud services: The prevalence of cloud services has gone well beyond webmail. The cloud is becoming a key aspect of dynamic business operations. On-demand, interactive Web-based applications require low-latency networks to deliver a high quality of experience -- a necessity for the cloud service provider to ensure usability and eliminate poor service quality complaints.
  • Consolidated data centers: Many enterprise networks have moved to a consolidated data center strategy to benefit from economies of scale for processing power, storage, and IT staffing. Information and data used across the enterprise need to be networked (across the metro, region, and nation) and shared with a “LAN-like feel” now that they are physically consolidated in fewer data centers. Low-latency equipment enables efficient information distribution and reliable support for mission-critical applications.
  • Remote business continuity and disaster recovery: Disk mirroring and synchronous replication of data between primary and secondary data centers with a requirement of (near) zero data loss, or a recovery point objective (RPO) near 0, are extremely latency-intolerant. Every kilometer of extra distance between the primary and backup databases matters, especially for regional connections designed to place the backup outside the primary site’s power grid, weather pattern, or flood zone (see the sketch below).
Cloud-based services are just one of several applications that could benefit from new low-latency optical networks. (iStock)
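To make the distance sensitivity of synchronous replication concrete, here is a minimal sketch, again in Python and again using the roughly 5-microseconds-per-kilometer fiber propagation figure. It estimates how the round trip needed to acknowledge each synchronous write stretches commit latency, and therefore caps how many dependent (serialized) writes a single replication stream can complete per second as primary/backup separation grows. The storage service time and site distances are illustrative assumptions, not vendor figures.

# Minimal sketch: how primary/backup separation stretches synchronous-write commits.
# Assumptions: ~4.9 microseconds of one-way fiber propagation per km, plus an
# illustrative 0.5 ms of combined local and remote storage service time per write.

US_PER_KM = 4.9           # one-way fiber propagation delay, microseconds per km
STORAGE_SERVICE_MS = 0.5  # assumed local write + remote write/ack handling time

def commit_latency_ms(separation_km: float) -> float:
    """Time to commit one synchronous write: storage time plus one network round trip."""
    round_trip_ms = 2 * separation_km * US_PER_KM / 1000.0
    return STORAGE_SERVICE_MS + round_trip_ms

def max_serialized_writes_per_sec(separation_km: float) -> float:
    """Upper bound on dependent writes per second for a single replication stream."""
    return 1000.0 / commit_latency_ms(separation_km)

if __name__ == "__main__":
    for km in (10, 50, 100, 200, 400):
        print(f"{km:4d} km separation: commit {commit_latency_ms(km):5.2f} ms, "
              f"at most {max_serialized_writes_per_sec(km):6.0f} dependent writes/s")

Parallel, independent writes can recover aggregate throughput, but any workload whose next write depends on the previous acknowledgment pays the full round trip on every commit, which is why RPO-near-zero designs keep the secondary site as close as the risk profile allows.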

This intensifying need for low latency across multiple markets and technologies hasn’t developed by mere coincidence. People are tired of waiting, and customers will do whatever they can to avoid it.

For example, at the O’Reilly Velocity 2009 conference, representatives from the Google Search and Microsoft Bing teams, Jake Brutlag and Eric Schurman respectively, co-presented results from independent latency experiments each had run on its own site to illustrate the effects of latency on user activity. Bing found that a 2-sec slowdown reduced search queries by 1.8% and revenue per user by 4.3%. Google Search found that a 400-msec delay resulted in 0.59% fewer searches per user, a significant effect for a delay lasting less than half a second. Google Search also found that users performed 0.21% fewer searches even after the delay was removed, indicating a distinct relationship between a slow user experience and longer-term consequences.

At the same conference, Shopzilla Vice President of Engineering Phil Dixon provided even more dramatic insight into the impact of latency on the bottom line. Shopzilla undertook a year-long performance redesign that produced a 5-sec latency improvement (from ~7 sec to ~2 sec). The result was an astounding 25% increase in page views and a 7-12% increase in revenue. These findings further illustrate the positive effect low latency can have on revenue.

Winning the low-latency race

These examples show low-latency connectivity evolving beyond links between the major global financial centers into a crucial backbone for everyday connectivity. Cloud-based services, such as online gaming and shopping, are prime examples.

The research from Google, Bing, and Shopzilla demonstrates that the key to connectivity is a combination of low-latency interconnection and proximity to the user. A network with multiple hubs located close to customers, linked by low-latency connections, is the best model. Facilities in major cities such as Seattle, Denver, Chicago, and Miami, rather than a single facility serving all of North America, can move service-level agreements (SLAs) from roughly 30-msec latency for the continent to less than 10 msec for the majority of users.
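As a rough illustration of why distributed hubs tighten those SLAs, the sketch below, under the same roughly 5-microseconds-per-kilometer fiber assumption, estimates one-way latency from a few sample user cities to the nearest of several hypothetical hubs versus a single central facility. City coordinates are approximate, the hub and user cities are placeholders, and a 1.4x route-inflation factor stands in for the fact that real fiber paths run longer than great-circle distances.

# Minimal sketch: nearest regional hub vs. a single central facility.
# Assumptions: ~4.9 us/km one-way fiber propagation, great-circle distance
# inflated by 1.4x to approximate real fiber routing, approximate coordinates,
# and a hypothetical central facility placed near Kansas City.
from math import radians, sin, cos, asin, sqrt

US_PER_KM = 4.9
ROUTE_INFLATION = 1.4

CITIES = {  # approximate latitude/longitude
    "Seattle": (47.6, -122.3), "Denver": (39.7, -105.0),
    "Chicago": (41.9, -87.6), "Miami": (25.8, -80.2),
    "Kansas City": (39.1, -94.6),
    "Boston": (42.4, -71.1), "Los Angeles": (34.1, -118.2), "Atlanta": (33.7, -84.4),
}
HUBS = ["Seattle", "Denver", "Chicago", "Miami"]

def distance_km(a: str, b: str) -> float:
    """Great-circle distance between two named cities, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (*CITIES[a], *CITIES[b]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def one_way_ms(a: str, b: str) -> float:
    """Estimated one-way fiber latency between two cities, in milliseconds."""
    return distance_km(a, b) * ROUTE_INFLATION * US_PER_KM / 1000.0

if __name__ == "__main__":
    for user in ("Boston", "Los Angeles", "Atlanta"):
        nearest = min(HUBS, key=lambda hub: one_way_ms(user, hub))
        print(f"{user:12s} nearest hub {nearest:8s} {one_way_ms(user, nearest):5.1f} ms | "
              f"single central facility {one_way_ms(user, 'Kansas City'):5.1f} ms")

With a single site, the worst-case users on the far coasts dominate the SLA; with regional hubs, most users sit within a single-digit-millisecond radius of a point of presence.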

With so much money on the line, speed and reliability are essential. Numerous companies are therefore taking note of the measurable advantages of reducing latency. For traders and financial firms especially, squeezing latency out of their services and private networks is a primary focus; even equipment latency, measured in microseconds, is now scrutinized to see just how low connection latency can go. Latency reduction is becoming an all-out race as businesses and financial firms alike look to gain an edge on the competition.

With so many developments in low-latency network infrastructure, who knows what else might be in store for the technology and telecommunications industries. In today’s economy, consumers demand ever-faster access to information because they can’t afford to wait out lags or errors in their systems. The demonstrated success that comes with reducing latency suggests that patience isn’t a virtue in today’s marketplace, not when organizations have so much access to technology optimized for network efficiency.

The current push for ultra-low latency certainly seems to indicate that the trend toward nearly instantaneous information flows is here to stay. Networks will continue to evolve over time, but speed will always be a determining factor. As network architects of all types examine latency moving forward, the mantra seems to be: go low or get left in the dust.

Jason Smith is Technical Solutions Marketing Manager for BTI Systems. He has more than 10 years of experience in telecommunications, in roles including network planning, enterprise network consulting, managed services business development, and product marketing.

