The Lightwave Guest Blog Archives
Read insightful opinions & analysis of optical communications & networks.

Resilient network architectures save more than just data
Jim Theodoras, ADVA Optical Networking
<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg" alt="Jim Theodoras, ADVA Optical Networking">By now, most in our industry are familiar with how <a adhocenable="false" href="">data centers</a> are designed. Both mass and industry media have extensively covered the lengths engineers go to in order to guarantee uninterruptible service. Given the value of the information they hold, it's no surprise that data centers are designed with such maniacal attention to detail and redundancy.<br> <br> Look at power: They have redundant power feeds from redundant local power plants. They have redundant generators in case the redundant power fails. They have massive redundant battery packs to hold up power just in case the redundant generators take too long to start. They even heat the generators' diesel engine combustion chambers 24/7 to guarantee they start at first crank.<br> <br> Look at security: There are huge earthen mounds surrounding the data centers to act as blast shields, concrete barriers inside those, barbed wire fences inside those, and guard gates inside those. And retina and palm scanners are de rigueur.<br> <br> Look at their network cabling: Everything is redundant, at least two to four times over, and cross-cabled.<br> <br> Look at personnel: A staff is kept onsite 24/7, sitting and waiting for a failure to happen. A major theme in grand ole telecommunication was five-nines reliability, meaning 99.999% uptime. That translates to less than 5.26 minutes of downtime per year. 
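That downtime figure is easy to verify; a quick back-of-the-envelope sketch in Python (assuming a 365.25-day year):

```python
# Five-nines availability: 99.999% uptime, i.e., 0.001% allowable downtime.
minutes_per_year = 365.25 * 24 * 60  # ~525,960 minutes in a year
downtime_minutes = (1 - 0.99999) * minutes_per_year
print(round(downtime_minutes, 2))  # ~5.26 minutes of downtime per year
```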
And today's data centers meet this mark easily.<br> <br> In an effort to take this resiliency to the next level, as well as scale fast enough to keep up with demand, data centers have migrated to leaf-and-spine architectures that inherently create duplication and redundancy. As they've grown from regional data centers to global networks, they've extended their leaf-and-spine connections to include geographically dispersed data centers. Software like Google's TrueTime and Microsoft's Replication Table then keeps multiple copies of data stored across the globe in sync. These new architectures have been so successful that data centers can now absorb a 1-2% failure rate. In fact, an entire data center can go down without too much fuss.<br> <br> But a funny thing's happened in the quest for more nines and greater resilience – localized resiliency has actually started to become less important.<br> <br> While it's well known that these new techniques save data and protect it from loss, less understood is just how much they save elsewhere.<br> <br> Staffing: If a data center can withstand a 1-2% failure rate, it's now enough to have a smaller team working normal hours, no overtime needed. If a server or switch goes down on Saturday, just fix it Monday morning.<br> <br> Security: While the data is still very valuable and privacy must be protected, perhaps just a fence and guard shack will suffice.<br> <br> Power: Instead of massive banks of batteries that maintain power for the entire data center for an hour, maybe onboard battery or capacitive solutions that hold up individual servers for five minutes will suffice. 
Instead of massive rows of diesel generators, maybe it's enough to just wait for the power company to restore power.<br> <br> In fact, some data center operators are running banks of servers in uncontrolled, unsecured environments as an extreme test of just how far they can go.<br> <br> Is it perhaps time for the optical communications industry to do the same -- namely, test just how far it can go? Yes, a few data center operators have been very vocal about loosening up electro-optical specifications, and a few component vendors have started to respond, but any significant cost benefit has yet to be realized. Opening up a spec here and there isn't going to cut it.<br> <br> The resiliency that these new network architectures provide requires an entirely new mindset. When we hear &quot;mega data center,&quot; we think of something as big as a sports stadium, as secure as a prison, with as many generators as the Capitol. The data center of the future might be closer to an unstaffed building in the warehouse district with a single guard and badge reader at the door, or perhaps a floor in a nearby office tower. 
Similarly, the optical links inside might be consumer-grade optics, with overclocked lasers, marginal link budgets, and uncleansed ferrules, running bit error rates that would make us cringe in horror today.<br> <br> In hardware engineering circles, there's an old joke for when a physical mistake is found: &quot;We'll just fix it in software.&quot; Maybe the time has come to depend more upon software and architectural resiliency and really let loose the optical links.<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a adhocenable="false" href="" target="_blank">ADVA Optical Networking</a>.<br> <br> </p>

(Published October 27, 2015)

Why there is no room for ego in the new-age networking ecosystem
Chris Janz, Ciena Corp.
<p><img width="133" height="191" src="/content/dam/lw/online-articles/2015/January/LWjanz012315.jpg" style="float: left; vertical-align: top; margin: 5px;">The term &quot;ecosystem&quot; seems pretty well ensconced in the IT industry, and in networking too of late. The idea of a system of vendors, developers, and operators interacting and collaborating to mutual benefit is one that seems like it ought to have come to fruition a long time ago. Given where we are heading now, it is hard to fathom why it took so long.<br> <br> But the ICT industry has to date consisted largely of &quot;ego-systems&quot;: in effect, a series of silos built around individual vendors, or limited consortia of vendors, working to create closed or proprietary solutions that were highly defensible. This was largely the legacy of history and the implementation of networking systems principally in hardware based on embedded, fixed protocols. 
This approach meant that vendors could work on evolutionary technologies or ideas behind closed doors, with &quot;flavors&quot; of implementation brought out that ultimately constrained operator options, innovation, and differentiation, and adversely affected their economics.<br> <br> But the industry as a whole has learned from the past, and the general shift of networking toward software – represented by the acronyms on everyone's lips: SDN (<a adhocenable="false" href="">software-defined networking</a>) and NFV (network functions virtualization) – is fundamentally changing the game. The trend toward software-powered networking is driving a reshaping of the industry landscape that will break the dominance of ego-systems and replace them with something much better: a true ecosystem encompassing the industry as a whole.<br> <br> <br> </p> <table cellspacing="0" cellpadding="1" border="0"> <tbody><tr><td><img src="/content/dam/lw/online-articles/2015/January/LWcienablog012315.jpg"></td> </tr><tr><td><b>The networking industry &quot;shift to software&quot; is driving powerful changes in the industry landscape, promoting increasing collaboration through standards and coding efforts, and setting the foundations for adoption of open framework systems that break industry “ego-systems” and drive an expansive industry ecosystem.</b><br> </td> </tr></tbody></table> <p><br> SDN and NFV reduce the scope of functional heterogeneity in hardware, pushing much more of how networks function into software. No longer so strictly tied to vendor &quot;iron,&quot; the industry can approach software much differently. Part and parcel of the trend is a rationalization and convergence of evolving software systems. 
Together, these forces both compel and permit the formation of open software systems and architectures in which all industry players are free to participate – now, all pulling in the same direction.<br> <br> One impact we are seeing is on networking standards organizations and how they function; another is the emergence of networking-oriented open source software consortia.<br> <br> While SDN and NFV have driven the emergence of new industry organizations and efforts, like the Open Networking Foundation, the ETSI NFV ISG, and OPNFV, they have also affected the orientation and way of working of existing organizations like the Metro Ethernet Forum and TM Forum. Both &quot;old&quot; and new organizations are finding common ground, aligning efforts, and looking for complementarities of particular focus in ways we have not seen before, as they all adjust to the &quot;shift to software.&quot;<br> <br> In parallel, we have seen the emergence of open source networking software consortia and efforts, like OpenDaylight and recently <a adhocenable="false" href="">ONOS from ON.Lab</a>. These efforts work differently from standards efforts; they progress by direct action – the generation of software code. But they are not working at cross-purposes to their counterparts. On the contrary: the industry standards and &quot;quasi-standards&quot; organizations are collectively providing the backdrop of agreement on principles of software function and architecture – e.g., information models, interfaces, etc. – that can guide, stabilize, and accelerate the open source development efforts.<br> <br> One point on which all of these organizations and efforts are increasingly clear and in agreement: Openness is key, and expansive openness leads to an expansive ecosystem. The shift of networking function toward software implies significant changes from the bottom to the very top of system implementations within operators' infrastructures and back offices. 
Progressing such an expansive evolution effectively requires the participation of the industry as a whole – which means open collaboration. The convergence of software systems and facilitation of automation through IT systems require the same thing. Finally, broad innovation – and the ability by operators to leverage the whole of industry innovation and to innovate and differentiate their businesses – requires the development of and general adherence to common open system frameworks that are – logically – fundamentally based on open source efforts.<br> <br> All this is exactly what we see happening. And as the trend develops, we'll move from &quot;forming and storming,&quot; through building the foundations, to reaping the rewards through technology adoption and deployment, and leveraging the power of an expansive – industry-scale – ecosystem of innovation. It will be a whole new ballgame.<br> <br> <b>Chris Janz</b> is Agility CTO at <a adhocenable="false" href="" target="_blank">Ciena Corp.</a> Ciena Agility is a new business division within Ciena that delivers turnkey solutions that enable service providers to offer virtualized network functions as an on-demand, consumption-based business service. Agility encompasses Ciena's SDN Multilayer WAN Controller and its applications, V-WAN, the Agility Matrix solution, including VNF Market and Director, as well as all future SDN and NFV development.<br> <br> </p>

(Published January 23, 2015)

Forget about the labels
Ward Williams, ProLabs
<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/online-articles/2014/11/LWwardwilliams.png">In the luxury and fashion world, we could argue that brand names and labels are important. 
But in our world, the optical network world, donning a brand-named <a adhocenable="false" href="/content/lw/en/equipment-design/components.html">fiber-optic component</a> is not mission critical. Purchasing behavior is—as it should be—based on quality, price, and service.<br> <br> A ProLabs survey, conducted at this year’s European Conference on Optical Communication (ECOC) in France with more than 120 respondents, found that quality, price, and service ranked as the three most important factors when purchasing fiber-optic components.<br> <br> In fact, more than half of the fiber-optic communication professionals surveyed ranked quality alone as their number one priority. Overall, 98% of the respondents included quality in their top three, 89% included price, and 53% included service among their top three priorities when purchasing fiber-optic components. What’s more, only 14% of respondents considered brand names a top-three priority or even a concern.<br> <br> This placing of quality over brand names is a result of the current market. Many in the industry are faced with greater competition increasingly eating into their margins. More than 61% of the survey respondents named this concern as the primary factor keeping them awake at night, since it is a major challenge to keeping their businesses running.<br> <br> This significant change in attitude among industry professionals is changing the market landscape. First, service providers and data center operators are more discerning and not scared of giving alternative suppliers a chance if the price and compatibility are right. 
For these companies, many of which were represented at ECOC, reputation must be earned through a deep-rooted commitment to product quality, flawless infrastructure and customer support, and a relentless approach to continuous improvement.<br> <br> Second, the market welcomes more players, provided they can deliver reliable products at a competitive price. In the end, the market will become more transparent, open, and dynamic—characteristics that will help reduce costs and drive wider adoption of optics as an interconnect technology. With lower costs, new markets will emerge and the whole industry will reap the benefits. For example, Facebook's Open Compute Project (OCP) has disrupted the data center market, maximizing innovation by encouraging the sharing of ideas. With OCP, buyers gain the power of choice, with more options available, rather than being stuck purchasing from traditional legacy manufacturers.<br> <br> This attitude change reflects a trend we have been seeing in recent years: the optical products market is maturing, and customers are increasingly open minded about where they source their parts. Buyers are becoming increasingly pragmatic and are choosing genuine quality over a label.<br> <br> Luckily for all, the time for overpriced devices is coming to an end. As customers become increasingly confident and better informed, they are more and more willing to make savvy choices.<br> <br> <b>Ward Williams</b> is chief commercial officer at <a adhocenable="false" href="" target="_blank">ProLabs</a>, an independent provider of global optical network infrastructure products. Ward was previously vice president of global sales and marketing for the datacom business unit of TE Connectivity. Ward’s key responsibilities include managing ProLabs' U.S. 
business, including key customer relationships.<br> <br> </p>

(Published November 3, 2014)

Life after 100G
Earl Kennedy, Alcatel-Lucent
<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/site-images/earl-kennedy-100.jpg">It’s hard to believe we’re talking about it already: The end of the 100G era. But skyrocketing bandwidth demand means many operators are already pondering what’s next. With 200G optical hitting the market, you probably have questions about when to make the move to this next phase – and what you’ll need to know when you get there.<br> <br> <b>The move to metro creates demand</b><br> If you are like most optical network operators, you are looking to move content closer to your end users. Dramatically shifting traffic from the optical core to the metro network is a good option to meet the relentless bandwidth demand growth highlighted in the figure below and relieve some of the pressure on your network. However, while this approach is practical, it will have consequences.</p> <p><img src="/content/dam/lw/online-articles/2014/09/LWalublogfig090514.jpg"><br> <br> For example, suddenly the question isn’t whether the market will go beyond 100G, but how soon. The 100G networks that were practically unimaginable four years ago already appear to have a finite shelf life.<br> <br> <b>Beyond 100G: What’s next</b><br> Now that 200G is commercially available and deployed by some operators, it’s a viable option to consider to support your metro evolution strategy. It gives you an immediate doubling in capacity, so it can accommodate demand when and as you need it. But beyond the cost-efficiency necessary to support return on your investment, what are some of the other key ingredients that a 200G deployment requires? 
Think “SAS”: Scalable, agile, and SDN-ready.<br> <br> <b>Scalable:</b> Metro networks are <a adhocenable="false" href="">forecast to grow 560% in total traffic</a> by the end of 2017. To meet that demand, any network technology deployed today will have to be scalable. In fact, the ability of a network to scale and aggregate wavelengths from 10G to 100G to 200G and beyond should be considered fundamental.<br> <br> Networks must economically meet the demand for dynamic services at potentially terabit scales, and across multiple layers, so that each of a broad set of services can be delivered at the most economical layer. A flexible “metro core” architecture that supports network convergence with minimal impact to service operations and organizations therefore is a vital part of moving scalable metro networks forward. The right scalable approach also will provide investment protection with the ability to double network capacity when you need it without incurring the upfront cost of buying twice the capacity you require today.<br> <br> <b>Agile:</b> Like most network operators, you’re likely looking to optimize IP and optical networking equipment to reduce layers and complexity. The move to 200G and beyond offers you the opportunity to collapse multiple networks into a single dynamic and reconfigurable multiservice, multilayer infrastructure that is efficient and agile. This will allow you to support rapid delivery of high-performance, on-demand, application-driven network services.<br> <br> The optimal network approach supports multi-technology, multiservice architectures that serve as a single platform for applications such as business wholesale, mobile backhaul, IPTV, data center connectivity, and enterprise vertical applications.<br> <br> <b>SDN-ready:</b> And of course, any new architecture will need to address unpredictable and dynamic traffic demands with software-defined networking (SDN). This requirement becomes more acute as the number and complexity of services continue to grow. 
An optical network that is software configurable simplifies operations, increases service velocity, and automates provisioning. SDN and a control plane automate the process of activating optical services.<br> <br> SDN offers the promise of greater network agility and efficiency through multilayer resource discovery and control as well as dynamic path selection. Based on policy-driven provisioning, SDN simplifies and automates service creation, resulting in swift service innovation and delivery. Software-configurable platforms lay the framework needed to implement SDN in the future.<br> <br> And let’s remember, a software-configurable 100G/200G platform that enables a doubling of capacity with the touch of a button results in faster time to revenue.<br> <br> <b>Adding it all up</b><br> A network that is scalable, agile, and programmable is critical to minimizing capex and opex. Scalable technology that prevents costly overbuilds and recurring investments in space and power will be paramount going forward. Agile optical networks can meet demand for dynamic services economically; programmability drives higher network utilization without sacrificing network or service reliability.<br> <br> To keep up with surging broadband traffic volumes, service providers in virtually every market are moving their optical transmission networks to 40G and 100G. But how long will this be enough? Moving to 200G can protect operator networks from the prospect of premature fiber exhaustion. 
And they can avoid investment in costly photonic overlays.<br> <br> <b>Earl Kennedy</b> is senior product marketing manager, <a adhocenable="false" href="" target="_blank">Alcatel-Lucent</a>, IP Transport.<br> <br> </p>

(Published September 5, 2014)

Overcoming the alphabet soup of form factors with 100G QSFP28
Arlon Martin, Mellanox Technologies
<p>When the IEEE finished the first 100G standard for Ethernet networks, the transceiver industry launched an alphabet soup of form factors. The <a href="" adhocenable="false">CFP</a> emerged first, &quot;C&quot; for 100 (the Roman numeral) and FP for &quot;Form factor, Pluggable.&quot; Like the early versions of 10G transceivers, the CFP was huge. When compared to the most popular 40G form factor, the <a href="" adhocenable="false">QSFP</a>, front-panel density decreased by a factor of three. Most CFP implementations doubled the power consumption per bit. And, if those two disadvantages were not enough, the price per bit increased by a factor of ten.<br> <br> The next generation of form factors, the CFP2, CFP4, and the <a href="" adhocenable="false">CPAK</a>, improved upon the CFP. But when compared to the popular 10G SFP+ and 40G QSFP+, none of these new members of the CFP family improved density, power consumption, or cost.<br> <br> Enter the 100G QSFP28. The QSFP28 has the exact same footprint as the 40G QSFP+. The &quot;Q&quot; is for &quot;Quad&quot;; just as the 40G QSFP+ is implemented using four 10-Gbps lanes, the 100G QSFP28 is implemented with four 25-Gbps lanes. In all QSFP versions, both the electrical lanes and the optical lanes operate at the same speed, eliminating the costly gearbox found in the CFP, CFP2, and CPAK. 
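The lane arithmetic behind these form factors is easy to check; a minimal sketch using the figures above (four lanes, 10 Gbps vs. 25 Gbps per lane):

```python
# QSFP lane arithmetic: same four-lane footprint, faster lanes.
lanes = 4
qsfp_plus_gbps = lanes * 10   # 40G QSFP+: four 10-Gbps lanes
qsfp28_gbps = lanes * 25      # 100G QSFP28: four 25-Gbps lanes
print(qsfp28_gbps)            # 100
# Identical footprint and port count, so front-panel density
# scales directly with the lane speed:
print(f"{qsfp28_gbps / qsfp_plus_gbps:.0%}")  # 250%
```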
The QSFP28 module has an upgraded electrical interface to support signaling at up to 28 Gbps, yet keeps all of the physical dimensions of its predecessor.<br> <br> The 100G QSFP28 makes it as easy to deploy 100G networks as 10G networks. When compared to any of the other alternatives, 100G QSFP28 increases density and decreases power and price per bit. It is fast becoming the universal data center form factor. Here are some of the reasons.<br> <br> The QSFP28 increases front-panel density by 250% over QSFP+. The form factor is the same and the maximum number of ports is the same, but the lane speeds are increased from 10 Gbps to 25 Gbps. The increase in panel density is even more dramatic when compared to some of the other 100-Gbps form factors: 450% versus the CFP2 and 360% versus the CPAK.<br> <br> Like the QSFP+, the same form factor supports both cables and transceivers. In the first generation of 100G switches and routers, the smaller CXP form factor was used for cabling and the CFP or CFP2 was used for transceivers. This forced the equipment designer to make huge sacrifices. A switch with CXP ports could not be used in any data center with singlemode fiber; a router with CFP2 or CPAK ports was limited in bandwidth by the 8-10 ports that could fit on the front panel.<br> <br> The QSFP28 resolves this problem. A 1-rack-unit (RU) switch can accommodate up to 36 QSFP ports on the front face plate. Many varieties of either transceivers or cables can plug into these ports. The cables can be either direct-attach copper cables, commonly referred to as DACs, or active optical cables (AOCs). DACs offer the lowest cost but are limited in reach to perhaps 3 m. They are typically used within the racks of the data center, or as chassis-to-chassis interconnect in large switches and routers. AOCs are much lighter and offer longer reaches of up to 100 m and more. 
Customers like AOCs because they are much cheaper than optical transceivers.<br> <br> QSFP28 transceivers can be based on either <a href="" adhocenable="false">VCSELs</a> (useful for shorter distances on multimode fiber) or <a href="" adhocenable="false">silicon photonics</a> (for longer distances on singlemode fiber). The advent of silicon photonics enables QSFP28 transceivers to support any data center reach up to 2 km or more. Silicon photonics provides a high degree of integration; the CMOS chips are small enough to fit within a QSFP package. Silicon photonics is also low-power; even WDM designs can fit within the 3.5-W maximum of the QSFP.<br> <br> With all of these technology choices available in the same form factor, the coming generations of high-bandwidth switches, routers, and adapters will all feature QSFP28 ports, ensuring data centers can scale to 100G networks with the same simplicity as 10G networks.<br> <b><br> Arlon Martin</b> is senior director of marketing at <a target="_blank" href="" adhocenable="false">Mellanox Technologies Ltd.</a><br> <br> </p>

(Published August 6, 2014)

Optical networks and the era of Big Data
Jim Theodoras, ADVA Optical Networking
<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/print-articles/Volume%2030/Issue%202/1303LW_JimTheodoras.jpg" alt="Jim Theodoras, ADVA Optical Networking">With the coming of the era of “Big Data,” we are faced with the latest in a long line of buzzwords. Big Data refers to the mining of huge data sets to gather new insights and trends that have never before been identifiable by other means.<br> <br> In the past, scientists did deep dives into dusty records in musty basements, attempting to prove or disprove a thesis that may or may not have led to fame. 
Today scientists can sift through vast amounts of data in real time and identify trends as they are occurring. An example frequently cited is Google’s flu map, which can identify flu levels across a country before the cases are even reported, simply by looking at people’s Internet search patterns.<br> <br> At first glance, Big Data might seem to have little to nothing to do with optical communications. Yet, let’s look at one of the more recent analogs, the “cloud.” The cloud referred to the moving of services from local resources to hosted resources that could reside physically anywhere on the globe. When the cloud was mentioned in the same breath as optical communications, it seemed somewhat of a reach. Yet, fast forward to today and the growth of the cloud is arguably the biggest driver of continued growth in optical. I would go so far as to say it has become optical’s most recent savior, for just when it seemed our industry was doomed to follow the slow and steady growth of telecommunication network upgrades, along came the cloud.<br> <br> Now it’s all about Big Data. So what does that have to do with the plumbing of the network? It turns out, more than one would think.<br> <br> As databases have outgrown the confines of their data centers, they have become truly global in nature. No longer is data hosted locally and simply backed up overnight. Data and computations on that data are now constantly being replicated and load balanced across global networks of data centers. Virtual machines are moved as needed in real time across huge geographical distances. In this context, I would argue that the traditional WAN has become somewhat of a misnomer, as wide area networks are no longer relegated to mere areas and may be as wide as the globe.<br> <br> The cloud stores everyone’s cold, hard data like a big hard drive in the sky. 
And now, Big Data will store all the warm and fuzzy relationships between those data sets, a kind of social media for bits and bytes.<br> <br> Transport networks for Big Data have some unique needs versus their predecessors. They must be efficient, as space, power, and money are forever in short supply when storing all of mankind’s knowledge base. Scalability also is important, as some content providers have technology replacement/upgrade cycles as short as 3 years. Big Data is also big money, and given the value inherent in the data itself, all data must be secured as it is shuttled from site to site.<br> <br> It turns out Big Data needs big networks. All of which bodes well for our optical industry.<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a adhocenable="false" href="" target="_blank">ADVA Optical Networking</a><br> <br> <br> </p>

(Published June 5, 2014)

The security of networks and the role optical can play in it
Jim Theodoras, ADVA Optical Networking
<p><img alt="Jim Theodoras, ADVA Optical Networking" src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg" style="float: left; vertical-align: top; margin: 5px;">Fiber optics has traditionally been viewed as a more secure way to transmit information than the alternatives. Copper wire can be tapped or monitored for electromagnetic emissions, and wireless can be intercepted rather easily. So it was somewhat surprising when recent revelations in the press revealed wide-scale tapping of fiber-optic trunk lines between data centers. It turns out tapping fiber is much easier than one would think.<br> <br> While transporting a network’s information is predominantly handled by fiber, securing the network has traditionally been the job of higher application layers. 
The most common method is IPsec, which forms a foundation of the Internet economy. When configured to AES-256, it would take an almost infinite time to try every possible key within its key space. However, the trick is that you do not have to try every key, just every potential password, and today a password mining rig costing &lt;$10,000 can try every 8-digit password in under 3 hours. That is why when you enter a new password into a web tool today, it grades your password’s strength.<br> <br> With the revelations of breaches everywhere, content providers and data center operators responded predictably – they added even more higher-layer encryption. While on the surface this seems reasonable, in matters of security one cannot necessarily trust oneself or one's gut feelings. Consider what I call “mathematical sleight of hand.” When faced with a very big number, one larger than our brains can comprehend, we assume cracking it must be impossible. The opposite happens in the lottery, where astronomical odds are presented as a simple matter of picking three or four numbers, so winning appears easy. If AES-256 did not secure your data, odds are the next gee-whiz algorithm will not either.<br> <br> So, where can optical make a play?<br> <br> One way of improving transport security is intrusion detection. A good analogy is home security systems. Rather than relying only on fancy locks on all doors and windows, motion and glass-breakage sensors are used to detect intruders. Similarly, channel monitors already in use today can be used to detect power fluctuations. OTDRs currently sold by test equipment vendors can identify discontinuities in the fiber and reflections caused by taps.<br> <br> Another way optical can help is by introducing encryption at lower layers in the network stack, not higher. <a adhocenable="false" href="">Optical transport</a> equipment typically has access to everything below the MAC layer, including PMA sublayers, PCS, and PHYs. 
An additional level of encryption can be added here; the lower the layer, the higher the throughput. And, by using bulk transport encryption, the full header and checksum can be included inside the encrypted container. Including the checksum prevents manipulation of the data, something standard payload-only encryption cannot do.<br> <br> While still deemed somewhat esoteric, Quantum Key Distribution (QKD) is gradually making its way into real network use. QKD offers many benefits that are too tempting to be ignored:<br> </p> <ul> <li style="margin-left: 20px;">Since the key generation and transmission are based upon quantum mechanics, any act of measurement disturbs the system, thus providing built-in intrusion detection. In fact, QKD assumes there is always an intruder listening in!<p></p> </li> <li style="margin-left: 20px;">The keys cannot be copied without degrading them. If enough information is copied from the key to be useful, then not enough information remains in the original key to be viable. In other words, the key is copy-proof.<p></p> </li> <li style="margin-left: 20px;">The rate of random bit generation is fast enough (over 1 Mbps) to allow continuously rolling truly random keys, rather than fixed pseudorandom keys with an expiration date.<p></p> </li> <li style="margin-left: 20px;">And, perhaps best of all, the keys generated have high entropy: they are truly random and resistant to the types of password mining machines that are common today. When coupled with QKD, standard AES-256 becomes more than sufficient to guarantee confidentiality.<br> <p></p> </li> </ul> <p>In short, all the optical technologies needed to contribute to network security are already available today. All that is left to do is the repurposing.
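The benefit of enclosing the checksum inside the encrypted container, rather than encrypting the payload alone, can be illustrated with a deliberately simplified sketch. Everything here is a stand-in: a hash-based toy keystream takes the place of the line cipher, and a plain CRC-32 takes the place of the frame check sequence. This is not cryptographically sound (real transport gear would use an authenticated cipher such as AES-GCM); it only shows why a blind modification of the ciphertext is caught on decryption when the checksum travels inside the encrypted frame.

```python
import hashlib
import zlib


def keystream(key: bytes, n: int) -> bytes:
    """Toy keystream for illustration only -- NOT a real cipher."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]


def encrypt_frame(key: bytes, payload: bytes) -> bytes:
    """Bulk-encrypt the payload *and* its checksum together."""
    crc = zlib.crc32(payload).to_bytes(4, "big")
    frame = payload + crc
    ks = keystream(key, len(frame))
    return bytes(a ^ b for a, b in zip(frame, ks))


def decrypt_frame(key: bytes, blob: bytes) -> bytes:
    """Decrypt and verify; any blind tampering corrupts the checksum."""
    ks = keystream(key, len(blob))
    frame = bytes(a ^ b for a, b in zip(blob, ks))
    payload, crc = frame[:-4], frame[-4:]
    if zlib.crc32(payload).to_bytes(4, "big") != crc:
        raise ValueError("checksum mismatch: frame was manipulated")
    return payload


key = b"demo-key"
blob = encrypt_frame(key, b"client payload")
assert decrypt_frame(key, blob) == b"client payload"

# Flip one ciphertext bit: the decrypted payload no longer matches
# the checksum that was sealed inside the same encrypted container.
tampered = bytes([blob[0] ^ 0x01]) + blob[1:]
try:
    decrypt_frame(key, tampered)
except ValueError as e:
    print(e)
```

With payload-only encryption, the checksum sits in the clear and can simply be recomputed by whoever altered the data; sealing it inside the container is what turns the checksum into a tamper tell-tale.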
The answer to improving network security is not fancier keys, but rather optics.<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a adhocenable="false" href="" target="_blank">ADVA Optical Networking</a><br> <br> </p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2014/04/the-security-of-networks-and-the-role-optical-can-play-in-it.html2014-04-02T20:04:00.000Z2014-04-02T19:45:53.613ZHot for 2014: Virtualization in the optical transport networknoemail@noemail.orgBrandon Collings<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/online-articles/2014/02/BCollings_cropped.jpg" alt="Brandon Collings, CTO, CCOP, JDSU">In data centers, network function virtualization is in full swing as firewalls, load balancers, and routers are increasingly software-implemented on diverse, cloud-enabled hardware elements. This trend has dramatically increased the value data center operators extract from their investments.<br> <br> Meanwhile, metro and long-haul optical transport networks are being built with next-generation <a href="/content/lw/en/network-design/dwdm-roadm.html" adhocenable="false">ROADM</a> features that promise substantial gains in capacity, flexibility, and operational efficiency. In 2014, as with virtualization within data centers, control-plane-enabled virtualization of the optical network will simplify life for network operators considerably.<br> <br> The key difference between control-plane virtualization in transport networks and data center <a href="" adhocenable="false">software-defined networking</a> (SDN) is in what is actually getting virtualized. SDN is typically thought of in terms of taking network functions away from standalone, discrete hardware platforms and instead managing these elements as virtual machines.
Control-plane virtualization in transport networks will generalize and simplify network functions and actions: masking off physical-plane details and automating planning, configuration, management, optimization, and healing. The human operator and planning processes are what will be virtualized.<br> <br> This increased automation and flexibility will let operators offload work from upper layers onto lower layers, including the photonic layer. For example, today, in a non-automated network, protection against node failure is handled by multiple costly redundant systems. Automated networks relax the need for expensive, extensive redundancy by automatically re-routing around network faults and restoring traffic.<br> <br> Virtualization will enable the rapid deployment of new services across the network. Operators will simply instruct the management system with the needed parameters of the new service—at the service level. The control plane will then, in an optimal way, determine the underlying physical requirements needed to support the service. A simple request to the control plane will replace what was a highly manual, lengthy, expensive, revenue-risking, and fault-prone process.<br> <br> So, 2014 will be a year of sorting out how this virtualization/SDN will be implemented in next-generation optical networks that are just coming online. The potential is there to enable services to be turned up much faster, operators with less training to use mouse clicks instead of engineering processes to do their jobs, faults to be accommodated immediately, and, in general, much more to be done with much less.<br> <br> The chief obstacle to this virtualization trend is the cautiousness with which the big carriers will approach this software development, control-plane integration, and increased level of control-plane management of their networks.
It is a shifting paradigm, like convincing a pilot to move from flying with a control stick to “flying by wire.” 2014 will see a big ramp-up for rollouts, but implementing virtualization will come in fits and starts.<br> <br> <b>Brandon Collings, Ph.D,</b> is CTO within the Communications and Commercial Optical Products business unit of <a target="_blank" href="" adhocenable="false">JDSU</a>.</p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2014/02/hot-for-2014-virtualization-in-the-optical-transport-network.html2014-02-10T19:30:00.000Z2014-02-18T21:17:19.319ZData centers become zoosnoemail@noemail.orgJim Theodoras, ADVA Optical Networking<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg" alt="Jim Theodoras, senior director of marketing, ADVA Optical Networking">With the rise of <a adhocenable="false" href="" target="_blank">Hadoop</a> in conjunction with the rise of the <a adhocenable="false" href="" target="_blank">mega data center</a>, the <a adhocenable="false" href="" target="_blank">elephant</a> has become the symbol of a <a adhocenable="false" href="" target="_blank">new world order</a> in database management. Death to serial, indexed, or centralized <a adhocenable="false" href="" target="_blank">databases</a>! All anyone really needs is to fling data haphazardly as fast as you can onto banks upon banks of commoditized servers. When you need to find something again, all you have to do is search (<a adhocenable="false" href="" target="_blank">MapReduce</a>).<br> <br> Admirably, this approach has scaled well, allowing Google to map the entire globe, Facebook to connect over 1 billion people, and Twitter to recently hit 306,000 tweets per second during Miley Cyrus’ VMA Award performance (Sorry, not gonna hotlink that one). But it turns out the “landfill” approach to data management is starting to show signs of strain. 
It would seem that, much like <a adhocenable="false" href="" target="_blank">my attic storage</a>, if the size of the mess gets big enough, it takes too long to find what you are looking for. Or, in database lingo, once the size of a distributed database gets large enough, launching a MapReduce function is akin to a self-inflicted <a adhocenable="false" href="" target="_blank">Denial-of-Service attack</a>. The noble elephant that once symbolized the Internet revolution has now become but a <a adhocenable="false" href="" target="_blank">white elephant</a>. The solution? Change zoo animals.<br> <br> Enter the <a adhocenable="false" href="" target="_blank">giraffe</a>. Or, in this case, <a adhocenable="false" href="" target="_blank">Giraph</a>. A quiet revolution is underway in large database analysis. In what has to be one of the <a adhocenable="false" href="" target="_blank">“why didn’t I think of that”</a> moments in history, the basic premise is that you take all the data points and draw lines for the relationships between them; the points and lines form vertices and triangles that you can now process using graphical tools. Who knew that the same technology used to render <a adhocenable="false" href="" target="_blank">Call of Duty</a> in all its glorious awesomeness would someday come in handy in social networks? I am guessing that the giraffe was chosen because its triangular camouflage pattern closely resembles the triangles of a Giraph database.<br> <br> At this year’s second annual GraphLab Workshop held in San Francisco (where else?), the performance metrics being shared were astounding. Facebook announced their move to a graph-based database model coined <a adhocenable="false" href="" target="_blank">TAO</a> (The Associations and Objects) just ahead of the workshop. Once there, they disclosed their graph had grown to over 1 trillion edges, and the ability to do a <a adhocenable="false" href="" target="_blank">“PageRank”</a> in less than 4 minutes using 200 machines.
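To get a feel for what a "PageRank" over a graph like TAO's involves: at trillion-edge scale the computation is distributed across hundreds of machines, but the underlying iteration is simple. Here is a toy power-iteration sketch over a small adjacency list; the graph, damping factor, and iteration count are all illustrative, not anything Facebook disclosed.

```python
def pagerank(graph, damping=0.85, iters=50):
    """Power iteration over an adjacency list {node: [out-neighbors]}."""
    nodes = list(graph)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v, outs in graph.items():
            if not outs:  # dangling node: spread its rank evenly
                for u in nodes:
                    new[u] += damping * rank[v] / n
            else:  # split this node's rank among its out-edges
                for u in outs:
                    new[u] += damping * rank[v] / len(outs)
        rank = new
    return rank


# Tiny illustrative "who links to whom" graph
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "c" accumulates the most rank
```

The graph frameworks named in this post differ mainly in how they partition the vertices and ship the per-edge rank updates between machines; the arithmetic per iteration is essentially the loop above.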
Twitter spoke about their in-house-developed open source graph library, <a adhocenable="false" href="" target="_blank">Cassovary</a>, consisting of 35 billion triangles, which they use for “who to follow,” “circle of trust,” and PageRank applications. What would have taken 1636 machines over 7 hours with Hadoop now only takes a single machine a matter of seconds with Cassovary! Microsoft spoke of the ability to perform PageRank on 1.4 billion edges in their own Naiad in less than 10 seconds using four machines, dropping to less than half a second using 64 machines. Google talked about using Pregel for PPR, “personalized PageRank.” Also presenting were Netflix and Walmart, who both use graphical techniques for their recommendation engines.<br> <br> So, is the giraffe… er, Giraph the web’s savior? Well, judging by the increase in performance being demonstrated, it is certainly going to allow today’s ever-exploding networks to keep growing for the foreseeable future. And, if and when the databases hit another wall in size and performance, they can always turn to <a adhocenable="false" href="" target="_blank">Hippo</a>.<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a adhocenable="false" href="" target="_blank">ADVA Optical Networking</a>.<br> <br> </p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2013/12/data-centers-become-zoos.html2013-12-04T16:30:00.000Z2013-12-04T17:06:02.130ZActive optical cables for our high-speed timesnoemail@noemail.orgBen Johnson, Fiberon Technologies Inc.<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/online-articles/2013/11/johnson_cropped.jpg" alt="Ben Johnson, account executive, Fiberon Technologies">There is no question that we live in a high-speed world. Our environment is shaped by places that pride themselves on services that emphasize quality, but prioritize speed.
From fast food to the way we walk from one place to another, and even how we search for and obtain information, everything is seemingly done as quickly as possible. As technology has caught up to speeds that were once merely desired but are now required, meeting this demand takes the appropriate software and hardware.<br> <br> So what are active optical cables – and, more importantly, why should you care? Let’s start with the basic definition that you would stumble upon if you happen to plug the term into a search engine: a specialized optical cable that uses electrical-to-optical conversion on the cable ends to improve the speed and distance performance of the cable without sacrificing compatibility with standard electrical interfaces.<br> <br> As for the second point, “why should you care?” Well, put simply, active optical cabling is one of the fastest growing technologies in the data center space. As people expect more information to be available at their fingertips, our communications systems will need to be quicker – and active optical cable is one of the best solutions to this challenge.<br> <br> So what do active optical cables specifically bring to the table, and why are they the way to go? These cables offer a few features that can have an immediate impact on your network.<br> <br> Primarily, these cables offer both higher bandwidth and a longer reach with a better footprint than current copper cables. Compared with the incumbent copper cables, active optical cables in most cases provide lighter weight, a smaller size, EMI immunity, a lower interconnection loss, and reduced power requirements. It almost seems too good to be true, but active optical cables are one of those technological innovations that make their predecessors look obsolete and unsophisticated.<br> <br> Another element driving the growth of active optical cable use is the expansion of data centers.
We are seeing far more “mega datacenters” being constructed, which means the cables connecting the infrastructure must go further than traditionally expected. The other data center trend that is accelerating the active optical cable market is the creation and development of new mid-level servers and switches that are optimized for these cables. For example, according to Shane Kavanaugh, Dell DCS, Dell has recently introduced low-power/high-speed servers with dual 40-Gbps ports. Developments such as these have accelerated and legitimized the development and deployment of more cost-effective QSFP+ 40-Gbps <a href="" adhocenable="false">optical transceivers</a> – including their use and acceptance as interfaces for active optical cables this year and beyond.<br> <br> In closing, let’s look at the numbers. For that is how many responsible decisions must be made every day - especially in the business world. According to the market research and analysis firm LightCounting, in 2012 the active optical cable market grew by a staggering 65%, much greater than their forecast. <a href="" adhocenable="false">They are now predicting</a> that the active optical cabling market will grow 30% to $150 million this year. This increase is in large part due to datacenter managers planning for the future and the growth of the Infiniband market.<br> <br> <b>Ben Johnson</b> is account executive at <a target="_blank" href="" adhocenable="false">Fiberon Technologies Inc.</a> He can be reached at +1-508-616-9500.<br> <br> <br> </p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2013/11/active-optical-cables-for-our-high-speed-times.html2013-11-01T14:15:00.000Z2013-11-01T18:05:55.338ZEthernet's next 10X leapnoemail@noemail.orgJohn D’Ambrosia, Ethernet Alliance<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/online-articles/2013/08/John%20D%27Ambrosia_cropped.jpg">There is something comforting in predictability. 
With Ethernet that predictability has been its rate progression – 10X increments from its initial 10 Mbps to 100 Mbps to 1 Gbps to 10 Gbps with little to no controversy.<br> <br> However, the simultaneous introduction of 40 Gigabit Ethernet (GbE) and 100GbE effectively ended this legacy, as has work on the next speed, 400GbE, currently underway within the IEEE. It now appears that Ethernet has abandoned its 10X increment legacy in favor of a 4X increment, leading to considerable speculation by some regarding the future trajectory of Ethernet. Will the next speed after 400 Gbps be 1.6 Tbps? This would seem to be the case if we were to simply multiply 400 Gbps by 4.<br> <br> A look back at Ethernet’s rate progression will provide some useful insight.<br> </p> <table cellspacing="0" cellpadding="0" border="0"> <tbody><tr><td><img alt="Potential future Ethernet rates" src="/content/dam/lw/online-articles/2013/08/LWgblog081313.jpg"></td> </tr><tr><td style="text-align: center;"><b>Figure 1. The rates of Ethernet.</b></td> </tr></tbody></table> <p><br> Figure 1 provides a view of Ethernet’s past, as well as its potential future. From 10 Mbps to 10 Gbps, Ethernet transmissions leveraged serial optical solutions. In other words, the underlying signaling technology equaled the rate of Ethernet. Beyond 10-Gbps Ethernet, however, things took a different direction, as both 40-Gbps Ethernet and 100-Gbps Ethernet were achieved through parallelization: 40-Gbps Ethernet via four channels of 10 Gbps and 100-Gbps Ethernet via 10 channels of 10 Gbps or four channels of 25 Gbps. (It is clear that the reuse of 10-Gbps optical technology also played a role.) The only exception to this use of parallelization to achieve higher speeds was the development of a serial 40-Gbps Ethernet specification, which was developed to be optically compatible with existing carrier 40-Gbps client interfaces (OTU3/STM-256/OC-768/40G Packet over SONET/SDH). 
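The lane arithmetic behind the parallel variants just described (each Ethernet rate as a lane count times a per-lane signaling rate) can be jotted down in a few lines. The 40G and 100G rows are the standard configurations named in the text; the 400G rows are candidate configurations implied by scaling the 25-Gbps and 40-Gbps signaling options, not settled standards at the time of writing.

```python
# (ethernet_rate_gbps, lanes, per_lane_gbps) for the parallel variants
# discussed above; the 400G entries are candidates, not ratified specs.
variants = [
    (40, 4, 10),     # 40GbE: 4 x 10G
    (100, 10, 10),   # 100GbE: 10 x 10G
    (100, 4, 25),    # 100GbE: 4 x 25G
    (400, 16, 25),   # 400GbE candidate: 16 x 25G
    (400, 10, 40),   # 400GbE candidate: 10 x 40G
]
for rate, lanes, per_lane in variants:
    assert rate == lanes * per_lane  # sanity-check the lane math
    print(f"{rate}GbE = {lanes} lanes x {per_lane}G")
```

The table makes the scaling limit easy to see: pushing past 400 Gbps with 25G or 40G lanes would require an impractically wide parallel interface, which is why the text points to 100G per-lane signaling for the terabit step.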
Optically speaking, multimode fiber parallelization was accomplished via multiple fibers, while WDM technology and multiple lambdas provided the parallelization mechanism for singlemode fiber.<br> <br> It might be anticipated that all initial Ethernet solutions in the future will employ some sort of parallelization as well. However, as shown in Figure 1, employing parallelization with 25-Gbps or 40-Gbps optical signaling will probably only scale to 400 Gbps given the width of the parallel optical solution.<br> <br> The IEEE 802.3 Ethernet Bandwidth Assessment forecasted that by 2015, on average, networks will need to support terabit capacities in their core and will continue to grow to 10 Tbps by 2020. Therefore, 400-Gbps interfaces based on 25-Gbps and 40-Gbps optical signaling will be transitional in nature until Terabit Ethernet solutions arrive to deal with the multi-terabit capacities. And it will be the introduction of 100-Gbps optical signaling combined with parallelization techniques that will enable support of this next phase of Ethernet and the exponential bandwidth growth of the future. (Of course, there will also need to be a corresponding jump to 100-Gbps electrical signaling per electrical lane as well.)<br> <br> However, the future of 100-Gbps optical signaling technology is unclear at this time, as it will be a significant undertaking to develop.<br> <br> In fact, while it may be easy to envision low-cost 100-Gbps Ethernet solutions that scale to 400 Gbps and ultimately reach 1 Tbps or even 1.6 Tbps, the entire endeavor will be a significant undertaking. 
Nonetheless, as we contemplate the future, it is easy to understand how one might argue that Ethernet’s historical 10X increase has been about the basic signaling rate, not the next rate of Ethernet.<br> <br> Stated another way, the traditional Ethernet 10X increase from 100 Gigabit Ethernet has not yet happened.<br> <br> <b>John D'Ambrosia</b> is chairman of the <a adhocenable="false" href="" target="_blank">Ethernet Alliance</a>. He is also a distinguished engineer at Dell.<br> <br> </p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2013/08/ethernets-next-10x-leap.html2013-08-13T15:45:00.000Z2013-08-13T17:37:36.313ZA question of Ethernetnoemail@noemail.orgJim Theodoras<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg">Well, <a href="" adhocenable="false">that</a> turned out to be about as subtle as a porcupine in a balloon shop.<br> <br> The intent of my previous post was not to belittle or berate current industry efforts around the future of higher-speed Ethernet but, rather, to trigger a more profound discussion on the long-term future of the technology. And, from that perspective, the blog appears to have worked, in that there has been no shortage of meaningful discussion over both coffee and libations since.<br> <br> Discussions have gone as far as questioning the need to always have data rates with neat factors of 10. Where is the dividing line between technically derived parameters and basic human psychology? The very essence of the scientific process is to remove the human element from experiments. Yet, here we are centuries later, making Ethernet speed steps in neat orders of 10. Would not an end customer be just as happy with a 101.3G connection as a <a adhocenable="false" href="">100G</a>?<br> <br> Early information transport protocols were not neat factors of 10 but rather multiples of the minimum bit rate needed to carry a single voice call. 
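That voice-derived hierarchy is easy to verify with a little arithmetic: a single digitized voice call (a DS0) is 64 kbps, and the classic North American carrier rates are integer multiples of it, plus framing overhead that this sketch deliberately ignores.

```python
DS0 = 64_000  # one voice call: 8 kHz sampling x 8 bits per sample

# Voice channels per classic North American carrier level
# (framing bits ignored, hence "payload" rather than line rate).
hierarchy = {"DS1 (T1)": 24, "DS3 (T3)": 672}
for name, calls in hierarchy.items():
    print(f"{name}: {calls} calls -> {calls * DS0 / 1e6:.3f} Mbps payload")
# DS1 payload is 1.536 Mbps (1.544 Mbps line rate with framing);
# DS3 payload is 43.008 Mbps (44.736 Mbps line rate with framing).
```

Awkward numbers like 1.544 and 44.736 Mbps fall straight out of those voice-call multiples, which is exactly the contrast with Ethernet's tidy powers of 10.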
Computer science practitioners abandoned Base 10 ages ago in favor of Base 2 arithmetic. Since nearly all electro-optic subcomponents now come in “quads,” including ICs, laser bars, detector diode arrays, MTP ferrules, and ribbon cables, should Ethernet now “grow by 4”?<br> <br> The rise of datacom applications was the reason given for the split 40/100G data rate of the last Ethernet speed step (40G for datacom and 100G for telecom). After all, much of the ballyhooed growth of Ethernet has been due to the interconnection of compute, storage, and switching hardware within data centers. Yet, if datacom needed the 40G step last time, why would 160G not make more sense this time? Should datacom be on a different growth path, or was this a one-time event? The Ethernet ecosphere does not necessarily have the resources to develop more than one speed at a time. So should one growth path be chosen? And, if so, which one, datacom or telecom?<br> <br> Some industry thought leaders have started to delve even deeper into the conundrum, with a holistic examination of the data center. Most of the work in a data center is done by compute and storage resources. The network is the information highway, and the switches are the traffic managers. The compute and storage devices predominantly have PCIe interfaces, while the traffic cops have Ethernet. When looking at the big picture, it becomes apparent that it is extremely inefficient for Ethernet conversions to occur at every port of every device, simply to make it through a top-of-rack switch. Some industry leaders, including the CxOs of companies involved, have begun to publicly opine about pushing Ethernet further toward the edge of data centers and making racks, rows, or even entire floors PCIe-only.<br> <br> Of course, on the flip side, both incumbents and startups for a while now have been toying with Ethernet-interfaced memories and storage.
While such logic makes sense on the surface, one must remember that each protocol tends to be best for that which it was originally designed, and both efforts stretch the protocols beyond their intended use unless new extensions are made.<br> <br> So, back to the original topic of what speed steps make sense. Let’s look at the telecom side of things.<br> <br> One industry stalwart I recently ran into was quick to point out that he is more worried about the network side of the equation. Client port densities had lagged at <a adhocenable="false" href="">100GbE</a> due to the unwieldy size of the CFP module. As QSFP28/CFP4 will dramatically increase client-side port densities, the pendulum now swings back to the network side to keep up.<br> <br> The problem is the fiber is filling up. Early in the days of optical communications, a fiber had so much bandwidth that it seemed impossible to ever completely fill it. A great leap in capacity came with adding colors with WDM. And now that modulation bandwidths have exceeded channel widths, we’ve reached an asymptote in growth in capacity (at least on a single fiber). Future increases in capacity will now be due to moving to higher-order modulations (from NRZ, to ODB, <a adhocenable="false" href="">DPSK</a>, <a adhocenable="false" href="">DP-QPSK</a>, <a adhocenable="false" href="">16-QAM</a>, etc.) that have better spectral density—a much slower growth proposition than in the past. You can talk “superchannels” all you want (and getting rid of wasteful guard bands does help), but once the fiber is full, it is full. Potential solutions involve more emphasis on L-Band, a re-examination of the S-Band or simply using more fibers (the average underground cable has 144, after all). Perhaps some yet-unforeseen technology will rise out of the shadows to solve this problem, such as modal division multiplexing. 
However, there are currently no technical saviors in sight.<br> <br> Admittedly, this blog is more question than content, but shouldn’t we be stepping back from the day-to-day routine and delving into these questions? Sometimes engineers (myself included) fall into the trap of simply going through the same motions, as if trapped in the movie Groundhog Day. What is the right speed curve for Ethernet? Does the data center need its own curve, or perhaps even its own protocol? Will client-port speeds outpace network-side speeds from here on out?<br> <br> I do not claim to have all the answers, though a good first step is to raise one’s hand and start asking the questions.<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a target="_blank" href="" adhocenable="false">ADVA Optical Networking</a>.<br> <br> </p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2013/06/a-question-of-ethernet.html2013-06-25T17:30:00.000Z2013-10-03T06:05:00.567ZEthernet runs out of steamnoemail@noemail.orgJim Theodoras, ADVA Optical Networking<p><img style="float: left; vertical-align: top; margin: 5px;" src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg">Big news on the higher-speed Ethernet front. No, not that work has begun on 400 Gigabit Ethernet (GbE), thus setting the bar low, but rather the intention to go for 1.6 Terabit Ethernet (TbE) afterward. Yes, that’s right. After decades of advancing Ethernet in increments of 10, we will now have to settle for a mere quadrupling of speeds each standards cycle. No word on whether the timeline of each cycle will shorten commensurately, though I have my doubts.<br> <br> So why the sudden change? Surely for such a drastic turnabout to occur there must be an earthshattering reason. 
Perhaps going for TbE and 10TbE thereafter would rip the very time/space continuum of the universe itself!<br> <br> It turns out the reason for the biggest shift in the laws of Ethernet since its inception is… wait for it… <i>It is hard</i>. That’s right. The very people who brought you Ethernet, the Internet, and surfing WiFi at Starbucks think TbE might be hard.<br> <br> I seem to remember no one had a clue how 100GbE might be accomplished when that effort was started. 100G coherent detection is one of the hardest Ethernet technologies ever attempted, if not the hardest. 10GBase-T’s computational power would have made a supercomputer blush only a decade ago. And both seem to work just fine, thank you. In fact, if you look at public minutes of all the past higher-speed Ethernet efforts (10GbE, 1GbE, 100M, etc.), the study groups have all had moments of panic and doubt when facing the next big step. All it took to overcome the challenge was the greatest minds in the industry working together as a team toward the betterment of the industry – the very reason standards bodies exist and meet regularly in the first place.<br> <br> OK, in all fairness, the real reason being given for a shift in strategy is the exploding growth of bandwidth consumption, and the belief that it is better to have a 4X improvement sooner rather than a 10X bump later. An executive of a North American Tier 1 service provider recently confided to me, “We gave our OK to 400GbE over 1TbE because we were told we could get it sooner.” Of course, this all depends on the 4X actually arriving sooner, and since 400GbE heavily leverages technology being developed for second-generation 100GbE, this seems a reasonable assumption.<br> <br> A bigger challenge for 400GbE might be market economics, as it will have to compete with the aforementioned reinvigorated 100GbE.
Service providers just got done ripping out all of their regenerators and dispersion compensating fiber (DCF) spools in favor of 100GbE <a adhocenable="false" href="">coherent transmission</a>. Now comes word that 16-QAM 400GbE might need the regens that were just yanked. A tough sell, to say the least.<br> <br> Have electrical speeds, bus widths, and laser modulation techniques really run out of steam? Are we doomed to a future of 4X Ethernet increments? Perhaps Fibre Channel had it right all along. Or maybe the ITU has it right with ODUflex. The only thing certain at this point is Ethernet is a-changing. Again, to be fair, the next rate after 400GbE will not be officially decided for years to come. Only time will tell if this was merely a brief detour for Ethernet, or a permanent change in compass heading.<br> <b><br> Jim Theodoras</b> is senior director of technical marketing at <a target="_blank" href="" adhocenable="false">ADVA Optical Networking.</a><br> <br> </p> http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2013/01/Ethernet-runs-out-of-steam.html2013-02-01T20:30:00.000Z2013-02-26T22:19:55.730ZAir in epoxiesnoemail@noemail.orgBarry Siroka, Fiber Optic Center<p><img src="/content/dam/lw/online-articles/2012/08/LW_siroka0812.jpg" style="float: left; vertical-align: top; margin: 5px;">Air bubbles in cured epoxies can be a major problem, especially in small applications such as fiber-optic terminations and optical bonding. A bubble can leave a void at the substrate surface, thus reducing the amount of epoxy at the interface. Less surface area with the adhesive could diminish the overall bond strength. Bubbles in the epoxy can also affect the amount of adhesive being dispensed if a pneumatic dispenser is used. <br> <br> Epoxies do not create gases when they cure or react.
Air is only introduced when the material is mixed, so we need to find good methods to remove any bubbles after the resin and hardener are mixed together.<br> <br> The oldest traditional method to remove air is to place the mixed material into a dish and put the dish into a vacuum chamber. The dish should be as large as possible to yield the most surface area, and ideally a vacuum of 10<sup>-6</sup> Torr (30 inches of mercury) is used. As air is pulled out, the material will &quot;foam&quot; and a head is created. The head will rise and break, thus indicating that most of the air has been removed. The amount of time needed for this to occur will depend on the nature of the specific epoxy. Materials of higher viscosity or that are more thixotropic (those that don’t flow without pressure) will take longer to “de-air.”<br> <br> There are several problems with this technique, however. It can take up to 20 minutes to de-air some materials, which will use up valuable working time. Also, it is possible that some lower volatility constituents of the adhesive could be pulled out with the air, thus leaving a material that may not produce maximum performance. Once the air is removed, the adhesive may have to be transferred to a dispenser. During the transfer, air could be reintroduced.<br> <br> A better method, for most applications, is to use a centrifuge. Here, mixed material is placed into a syringe. The capped syringe is centrifuged between 3000 and 3500 RPM for 3-5 minutes (high-viscosity material may require a little more time). All the air condenses into one air bubble that can easily be ejected from the syringe. The syringe can then be used for dispensing without the need for material transfer.<br> <br> The centrifuge method is by far faster and more complete, but there are some epoxies that cannot use this method. The resin and hardener of certain chemistries or fillers in the epoxy could separate while being centrifuged. Check with the adhesive supplier to be sure.
<br> <br> <b>Barry Siroka</b> is responsible for business development of polymers at <a href="" target="_blank">Fiber Optic Center</a>. This is part of a series of guest blogs he is writing on adhesives for high-tech applications. He can be reached via email <a href="" target="_blank">here</a>.</p> <p><i>Published December 13, 2012</i></p> <hr> <p><b>SDN: Mirage or revolution?</b><br> Jim Theodoras, ADVA Optical Networking</p> <p><img src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg" style="float: left; vertical-align: top; margin: 5px;">Software-defined networking (SDN) is such an all-encompassing term that it is difficult to define, and discussions on the subject among colleagues (especially when they are engineers) quickly devolve into disagreement. Like a mirage, each proponent sees what they want to see in SDN. Unlike a mirage, there’s an element of reality to all of these conflicting visions.<br> <br> To some, SDN is a way of controlling networks in the most efficient way possible. The basic idea is to use an external central controller to manage a network of switches and routers. Since the central controller sees the whole picture, it can make decisions that are best for the whole network, rather than decisions optimized for local conditions. While network controllers are already in use today, what makes SDN different is that the controller has access to the forwarding engine of each router/switch, giving it direct control over the path packets take. OpenFlow is currently the dominant protocol used to bridge the data and control planes in an SDN implementation.<br> <br> To others, SDN is a way of letting packet-based networks better handle today’s predominantly flow-based traffic, such as over-the-top (OTT) video.
The renewed impetus for forging down the path of SDN comes as the types of traffic being carried by today’s networks shift from discrete packets to flows of packets. While the two may sound similar, the difference is more than academic.<br> <br> Data flows are continuous streams of data with a start and end point, and a common source and destination. They differ from packets in that once a router/switch port is dedicated to a flow, that port is often tied up until the flow terminates. Routers were optimized to be packet routers, yet today they are being used as flow routers, a poor fit to say the least.<br> <br> Running an SDN architecture enables information on all the flows passing through a router to be collected into a flow table in the network controller. Flows then can be acted upon as a group using wildcards. For example, if the flow controller identifies a bunch of similar flows, they can be grouped together as a locally cached group session. <br> <br> To still others, SDN promises the commoditization of switching/routing hardware, similar to what has occurred with servers. While SDN was conceived as a way to make the Internet work better, ironically internal data center networks have been the catalyst for wider adoption and development. In trying to manage their massive internal networks and improve their efficiency, data center operators have turned to SDN architecture and the OpenFlow protocol to control them, and why not? Their computing, server, and storage resources are already virtualized. The only thing within the walls of their data centers that has not been virtualized is the network itself, and the main hurdle in virtualizing networks has been access to the forwarding plane. With access to the forwarding plane, each time a packet address is looked up, it can be simultaneously looked up in a flow table.
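<p>The flow-table idea can be illustrated with a toy sketch. The field names and match semantics below are invented for illustration; they are not OpenFlow's actual match structure or wire format:</p>

```python
WILDCARD = "*"

class FlowTable:
    """Toy flow table: ordered (match, action) rules with wildcard fields."""
    def __init__(self):
        self.rules = []

    def lookup(self, pkt):
        for match, action in self.rules:
            if all(v == WILDCARD or pkt.get(k) == v for k, v in match.items()):
                return action
        return None  # table miss

class Controller:
    """Central controller: sees table misses and installs wildcard rules."""
    def handle_miss(self, table, pkt):
        # Group all similar flows to this destination into one cached
        # session by wildcarding the source fields.
        match = {"dst": pkt["dst"], "src": WILDCARD, "sport": WILDCARD}
        table.rules.append((match, "forward"))
        return "forward"

def on_packet(table, controller, pkt):
    """Datapath lookup; the first packet of an unknown flow goes upstairs."""
    action = table.lookup(pkt)
    if action is None:
        action = controller.handle_miss(table, pkt)
    return action

table, ctrl = FlowTable(), Controller()
on_packet(table, ctrl, {"src": "a", "sport": 1234, "dst": "cdn"})  # miss -> controller
on_packet(table, ctrl, {"src": "b", "sport": 9999, "dst": "cdn"})  # hit on wildcard rule
```

<p>After the first packet, the controller's wildcard rule covers every later flow to the same destination, so only one table entry exists and no further packets are punted.</p>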
If the packet happens to be the first packet of a flow, it is handed off to the flow controller.<br> <br> As more data centers have popped up around the globe, operators are now looking to extend their use of OpenFlow beyond individual sites to the physical transport networks that interconnect them. The physical transport layer of the network itself has become a complex switching element, just like packet switches, with directions, fibers, and colors as degrees of freedom. Moving forward, flexible superchannels and variable bit-rate modulations add two more degrees of freedom. The beauty of SDN is that it can be abstracted and decoupled from the complexity and specific configuration of the switch element. That way, it works just as well with physical layers as it does with the switching and network layers. <br> <br> While no one can predict the SDN endgame, we are at the cusp of a revolution in the way global networks are designed, built, and managed.<br> <b><br> Jim Theodoras</b> is senior director of technical marketing at <a href="" target="_blank">ADVA Optical Networking.</a><br> <br> </p> <p><i>Published November 6, 2012</i></p> <hr> <p><b>Choosing specialty adhesives</b><br> Barry Siroka, Fiber Optic Center</p> <table cellspacing="0" cellpadding="1" border="0" align="left"> <tbody><tr><td><a href="/content/dam/lw/online-articles/2012/08/LW_siroka0812.jpg"><img src="/content/dam/lw/online-articles/2012/08/LW_siroka0812.jpg"></a></td> <td>&nbsp;&nbsp;&nbsp;</td> </tr></tbody></table> <p>Adhesives are a small but very important part of many assembly operations. There are many different chemistries and formula variations available, so the selection of an adhesive can be difficult. To narrow the choices, there are a number of parameters to be considered. <br> <br> The first and most important is to know the substrates to be bonded.
The terms &quot;metal&quot; or &quot;plastic&quot; are not specific enough to determine the best adhesive. Some metals oxidize readily and many plastics have low surface tension, making them more difficult to bond. These different materials may require alternative surface preparations. So we also need to understand what types of surface preparation can be performed. <br> <br> Processing capability needs to be considered. Can UV light or heat be used to cure the adhesive? If heat can be used, what is the maximum exposure temperature that the unit can withstand during the curing operation? Also, is there a preferred viscosity?<br> <br> The bond gap between the substrates is extremely important in determining the adhesive. Some chemistries work best with very small bond gaps, while others must be applied much thicker to obtain a strong bond. <br> <br> Understanding the purpose of the adhesive is also very important. For example, an optical application may need high optical transmission or a specific refractive index if the adhesive is in the optical pathway. During use, will the adhesive need to perform at elevated temperatures or undergo temperature cycling? Will the part be exposed to any unusual environmental conditions such as high moisture, chemicals, or high vacuum? <br> <br> Additionally, we need to know if the adhesive needs to be a permanent part of the unit or just a short-duration material. Some adhesive systems will work well initially and then degrade over time. Others will last indefinitely.<br> <br> In future writings, I will discuss these issues in more detail. Please remember that no one adhesive is good for every application. Please feel free to write me with any questions.<br> <br> <b>Barry Siroka</b> is responsible for business development of polymers at <a href="" target="_blank">Fiber Optic Center.</a> This is the first in a series of guest blogs he will write on adhesives for high-tech applications.
He can be reached via email <a href="" target="_blank">here</a>.<br> <br> <br> </p> <p><i>Published August 31, 2012</i></p> <hr> <p><b>In-flight encryption: A critical service opportunity for network service providers</b><br> Malcolm Loro</p> <table cellspacing="0" cellpadding="1" border="0" align="left"> <tbody><tr><td><img src="/content/dam/lw/online-articles/2012/07/LWLoro_croppedv2.jpg"></td> <td>&nbsp;</td> </tr></tbody></table> <p>According to the latest information, data security breaches are on the rise in the private sector: <a href="" target="_blank">58 percent more breaches were reported</a> to the Information Commissioner’s Office (ICO) in 2011/12 than in the previous year. In the report <a href="" target="_blank">“2010 U.S. Cost of a Data Breach,”</a> the Ponemon Institute estimated the cost of dealing with an incident in 2010 had risen to $7.2 million (compared to $1.5 million in 2005).<br> <br> The stakes are high for business and government entities, and we’re seeing increasing responses to this escalating threat, particularly through additional security mandates and regulations. Organizations today have to abide by tougher compliance legislation – especially in sectors such as finance, healthcare, pharmaceuticals, and manufacturing. And increasing national, international, and trade group regulation is forcing companies to monitor their levels of compliance continuously. <br> <br> Rethinking security is becoming a top CIO priority, and organizations are going to great lengths to protect the “at-rest” information stored in their data centers from unauthorized access.
IT managers and CIOs are using an array of techniques intended to lock down critical IT infrastructure, including servers, databases, routers, and switches, by managing user access and credentialing.<br> <br> In the majority of cases, however, the need for secure communications extends beyond the walls of the data center. As increasing volumes of sensitive information are distributed across global fiber-optic networks, a comprehensive IT security approach must now encompass not just at-rest data security, but also “in-flight” data security to protect information as it travels outside the confines of the enterprise. Fortunately, sophisticated in-flight encryption techniques can camouflage traffic so it cannot be read or manipulated, and can even disguise the fact that there is traffic flowing at all.<br> <br> When considering in-flight encryption strategies, one of the first questions is where to encrypt:</p> <ol> <li>Protect at the application layer, or</li> <li>Protect at the network transport layer.<br> </li> </ol> <p>As many applications in an enterprise network use IP (network Layer 3) for data transfer and communication, application-level IP encryption is viewed as the most logical choice. In this approach, data is already encrypted when it reaches the optical network elements to be transmitted to another location. With the right encryption standards, this approach provides sufficient security for many IT applications, mainly those that are not data-intensive or time-sensitive.<br> <br> However, with some critical enterprise IT applications, such as real-time disk mirroring for business continuity/disaster recovery or time-sensitive voice or video data transfer, Layer 3 encryption can actually negatively affect operational efficiency. Sizeable overhead is often added to the payload data packets, effectively reducing the operational data throughput.
Further, the encryption process contributes considerable latency to the data transfer, which can adversely affect higher-level applications and create severe performance degradation.<br> <br> That is where the benefits of a lower-layer optical transport encryption solution kick in. While not necessary for all IT applications, for those that are more bandwidth-intensive or time-sensitive, a well-devised and properly implemented encryption solution integrated at the transport layer eliminates application delays while adhering to the highest security standards. <br> <br> Protocol transparency is another key consideration. Enterprise networks are constantly evolving – this means that services that run over them today will probably be different from those offered in the future. Therefore, it is important that service providers select technology that supports protocol-agnostic encryption so they have the flexibility to support a variety of transport types.<br> <br> Additionally, deploying encryption solutions at the application layer can be expensive. Individual traffic streams require individual encryption devices, often specific to the protocol involved, and multiple ports on each WAN network element are consumed, adding to the cost and complexity. <br> <br> With transport layer encryption, also referred to as “bulk” encryption, the entire traffic stream is encrypted, overheads and all, rather than individual applications. This eliminates the need for complicated frame checks and modifications to associated overhead, and provides 100 percent throughput transport and seamless interworking across multi-vendor networks.<br> <br> This approach presents a significant opportunity for network service providers to offer carrier-managed network encryption.
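<p>The throughput argument above can be put in rough numbers. The 57-byte figure below is an assumed, ballpark per-packet overhead for an IPsec-style Layer 3 tunnel (the real number varies with cipher, mode, and padding); it is not a figure from this post:</p>

```python
def effective_throughput(line_rate_gbps, payload_bytes, overhead_bytes):
    """Share of the line rate left for payload once per-packet overhead is added."""
    return line_rate_gbps * payload_bytes / (payload_bytes + overhead_bytes)

# Assumed example: 10-Gbps link, 1400-byte payloads, ~57 bytes of
# per-packet tunnel overhead at Layer 3 versus none for Layer 1
# "bulk" encryption of the whole stream.
l3 = effective_throughput(10, 1400, 57)  # roughly 9.6 Gbps of goodput
l1 = effective_throughput(10, 1400, 0)   # the full 10 Gbps
```

<p>The per-packet tax looks small, but at wire speed it is capacity the customer paid for and cannot use, which is the core of the bulk-encryption pitch.</p>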
With the right transport solution, incorporating a Federal Information Processing Standards (FIPS)-compliant encryption engine and encryption key management tools to enable their customers to control and monitor the security of their network, service providers can:</p> <ul> <li>increase customer retention and loyalty</li> <li>differentiate service offerings, increase margins, and move up the value chain from circuit revenue</li> <li>attract new customers in key verticals that invest in security products today, including financial services, healthcare, government, military, and technology organizations.</li> </ul> <p>Weakness in network security has come to haunt enterprises in the form of massive fines and badly damaged reputations. Network-level “encryption-as-a-service” is poised to become a key ingredient in enterprise IT security efforts to safeguard critical business information when it leaves the building. Carriers that offer in-flight encryption will have a leg up in attracting customers that transport highly sensitive information.
<br> <br> <b>Malcolm Loro</b> is director, enterprise solutions, at <a href="" target="_blank">Ciena Corp.</a> He is responsible for driving Ciena’s market strategies for Carrier Managed Service solutions and Enterprise Private Network solutions that address key business challenges in markets such as financial services, healthcare, utility, media &amp; entertainment, and the public sector.<br> <br> </p> <p><i>Published July 19, 2012</i></p> <hr> <p><b>As the pendulum swings</b><br> Jim Theodoras, ADVA Optical Networking</p> <table cellspacing="0" cellpadding="1" border="0" align="left"> <tbody><tr><td><img src="/content/dam/lw/site-images/Jim%2BTheodoras%2Bcropped.jpg"></td> <td>&nbsp;</td> </tr></tbody></table> <p>Almost two decades ago, the optical communications business was dominated by companies with strong research labs that actually invented the intellectual property that made their systems work. But in all maturing industries, as the market grows, the ecosystem splits into smaller segments. In this case, optical component companies sprang up, offering value-add to smaller systems companies that may not have had the breadth or depth to master the electro-optics piece of the equation. After all, dealing with fiber splicing, S-parameters, diode pumping, or thermo-electric cooling was esoteric stuff back then. At first, it was subsystems, with engineers leaving to form design houses that delivered linecards or large modules (fiber amplifiers, transponders, multiplexers).
Gradually the subsystems market subdivided into even smaller elements, eventually reaching the point that TOSAs, ROSAs, DFB lasers, isolators, etc., were all just a phone call away.<br> <br> With the advent of the GBIC came the start of the pluggable optic revolution, and a long line of standard form factors developed through cooperative multiple source agreements (SFP, XFP, QSFP, etc.). Now, not only did optical communication system builders not need to understand the optics, they could completely remove them from their systems altogether. Since their systems were comparison shopped using the metric of dollars/port, this sleight-of-hand allowed them to completely remove optical costs from their systems’ cost per port rollup as well. A key cost of their systems was (and is still today) hidden from end customers.<br> <br> Along with the pluggable optics revolution came the rise of large optical transceiver powerhouses. Those optical component companies that jumped on the pluggable transceiver bandwagon quickly grew, while those that stuck to their elemental component roots stumbled, either scrambling to buy available pluggable startups or fading into oblivion. With such a simple business model that promised easy profits, pluggable transceiver companies sprang up everywhere. And business cycles being what they are, suppliers eventually came to outnumber customers.<br> <br> Fast forward to today, and we are in the middle of a consolidation of optical component suppliers. For example, Bookham and Avanex, themselves amalgams of smaller companies, merged to form Oclaro, which will soon <a href="/content/lw/en/articles/2012/03/oclaro-opnext-to-merge.html">merge with Opnext</a>.
The ecosystem that fragmented into many segments is recombining into fewer companies that offer a wider range of products.<br> <br> Basic business theory suggests there eventually will be only a few dominant players (There are currently four: Finisar, Oclaro/Opnext, JDSU, and Sumitomo.) And as the optical component suppliers grow even larger, they begin to face the same challenges that made system houses turn to them for optics in the first place, such as shareholders to please. They risk long-term strategic thinking and internal innovation gradually giving way to quarterly bean counting and external IP acquisition. Some have even quietly begun offering to OEM complete optical communication systems. Why not -- they now have all the pieces anyway.<br> <br> But is there more to what is happening than meets the eye? Perhaps we are actually returning to the days of vertically integrated optical communication giants from whence we came. It seems almost unfathomable, yet there are signs everywhere. One large communication system company has consistently messaged that they do not want to be in the optical component business. After all, when you are used to margins over 60%, why would you want to enter a business with single-digit margins? Yet, over the last couple years this company has been consistently buying up optical technology companies (see <a href="/content/lw/en/articles/2010/05/cisco-to-acquire-coreoptics-94447114.html">“Cisco to acquire CoreOptics”</a> and <a href="/content/lw/en/articles/2012/02/cisco-to-acquire-cmos-silicon-photonics-firm-lightwire.html">“Cisco to acquire CMOS silicon photonics firm Lightwire”</a>), and other large system vendors have quickly followed suit. Perhaps this is a necessary defensive reaction to <a href="/content/lw/en/articles/2012/02/delloro-dwdm-market-up-19-in-2011.html">the rise of state-backed communication behemoths</a> around the world. 
<br> <br> Or, perhaps this new reality reflects the advancement of pluggable optics to the point where supporting them is once again so complex that only large companies with technical depth can be successful. Anyone who has tried to mass produce a 10GBASE-LRM compliant XFI bus understands the difficulty; <a href="/content/lw/en/articles/2012/05/oif-launches-56-gbps-electrical-interface-projects.html">the OIF’s latest interface bus</a> will support data rates up to and including 56 Gbps. Just the receiver in the industry-standard 100G interface requires a level of computation power that would have given a supercomputer bragging rights just a few years ago (see <a href="/content/lw/en/articles/2010/05/oif-approves-100g-coherent-receiver-implementation-agreement--93309559.html">“OIF approves 100G coherent receiver implementation agreement”</a>). The industry’s move to variable QAM rates to support the next speed steps only makes matters worse.<br> <br> And so the pendulum swings. As with all businesses, there are cycles. Arguably, the big push over a decade ago to break up vertically integrated companies into a complex supply chain might have been a bit overblown. The optical supply chain fragmented into too many small vendors without sustainable business models.&nbsp; And now as the race toward consolidation and vertical integration appears to be picking up speed, we risk overdoing it again. 
Having too few optical vendors would put at risk the innovation and price competition that come with a diverse supply base.<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a href="" target="_blank">ADVA Optical Networking</a> and a past president of the <a target="_blank" href="">Ethernet Alliance</a>.<br> <br> <br> </p> <p><i>Published June 19, 2012</i></p> <hr> <p><b>Swimming in 100GbE’s alphabet soup</b><br> Jim Theodoras, ADVA Optical Networking</p> <p>Recently, there has been a lot of confusion around the area of next-generation 100-Gigabit Ethernet (100GbE) development and who is doing what. Given the alphabet soup of acronyms one has to wade through when dealing with standards bodies and multisource agreements (MSAs), such confusion perhaps is not surprising. For example, I have recently seen multiple reputable news sources report that IEEE 802.3bj is going to be working on 400GbE! Given the fact that “100GbE” and “four-lane” were bandied about, one could perhaps understand the faux pas. Since this is by no means the only example of confusion out there, let’s do a quick rundown of some recent 100GbE announcements and what they really mean.<br> <br> Let’s start with the aforementioned IEEE 802.3bj. According to the effort’s chair, John D’Ambrosia, chief Ethernet evangelist within the CTO Office at Dell, “The IEEE 802.3bj is currently in task force status, with a mission to define next-generation 100GE copper interfaces for backplanes and client ports.” Notice he said copper. As optical client ports have leaped to 40- and 100GbE capacities, it seems the bottleneck has moved once again to the backplanes that interconnect them. Moreover, denser 40/100GbE client ports are on the horizon, and backplanes must scale to leverage the new capacity.<br> <br> That is not to say backplanes are the only focus.
It turns out backplane technologies work great for very short reach copper cable client connections in applications where even the cost of an optical alignment ferrule would be prohibitive. The end goal appears to be 1 m on a backplane and 5 m in a cable, with both using four parallel lanes of 25 Gbps in place of today’s 10 lanes of 10 Gbps.<br> <br> Wait, did I just say four lanes of 25G? Does that mean the end of the 10X10 MSA? Of course not -- the two efforts are totally unrelated. IEEE 802.3bj is working on backplane interconnect and client ports, not the electrical interface to optical modules, though admittedly it could be re-tasked for such in a pinch. In fact, the CFP MSA recently amended the electrical connector pin-out definition for their CFP2 module to increase the number of pins so that a 10-wide bus could be accommodated, though a 4:10 reverse multiplexer appears to be the preferred approach at the moment.<br> <br> According to the <a href="" target="_blank">10x10 MSA</a>, “The 10X10 MSA set of solutions has set a new price point for 100 Gigabit Ethernet and thousands of ports have shipped because of their low cost. When the industry goes to 4x25Gbps electrical signaling in the CFP2, a reverse gearbox can be used to break out the lanes into 10 lanes of 10G to support the low cost 10G lasers and receivers.”<br> <br> And speaking of CFPs, the CFP package was the first generation of 100GbE pluggable optical modules. The CFP was developed by the <a href="" target="_blank">CFP MSA</a>, and has been a great success for 100GbE as a whole as it allowed the industry to focus all of its investments on a single form factor. However, like all first-generation optical modules, it was rather large, with four being about the most you could fit onto the faceplate of a line card. 
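<p>The lane arithmetic running through these efforts is simple enough to sketch. The round-robin re-striping below is a toy stand-in for a 4:10 reverse gearbox; real gearboxes mux at the bit level per the relevant specs, which this does not model:</p>

```python
def aggregate_gbps(lanes, lane_rate_gbps):
    """Total throughput carried by a set of identical lanes."""
    return lanes * lane_rate_gbps

# The same 100 Gbps can be carried either way; a reverse gearbox
# only re-packs the bits between the two lane formats.
assert aggregate_gbps(4, 25) == aggregate_gbps(10, 10) == 100

def regear(lanes_in, n_out):
    """Toy round-robin re-striping of symbols onto n_out output lanes."""
    flat = [sym for tick in zip(*lanes_in) for sym in tick]
    return [flat[i::n_out] for i in range(n_out)]

host = [[f"L{i}S{j}" for j in range(10)] for i in range(4)]  # 4 x 25G host lanes
optical = regear(host, 10)                                   # 10 x 10G toward the lasers
```

<p>Nothing is gained or lost in the re-striping, which is why the 10x10 MSA's low-cost 10G lasers can sit behind a 4x25G electrical interface.</p>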
Not one to rest on its laurels, “The CFP MSA is now working on two next generation form factors, CFP2 and CFP4, to double and quadruple the front panel density, respectively,” explains Chris Cole, CFP MSA spokesperson. <br> <br> Similarly, the IEEE has formed a Study Group with an eye on higher-density 100GbE. According to Dan Dove, senior director of technology at AppliedMicro and chair of the IEEE 802.3 Next Generation Optical Study Group, “We are currently in the Study Group phase, investigating next generation 100G optical PMD alternatives with a goal to reduce the cost, power, and size required for 100G links.”<br> <br> So, what of the aforementioned 400GbE? It turns out, before moving to the next speed step, the thought leaders in Ethernet have been striving to better understand the bandwidth needs of end users, as well as bandwidth consumption models and trends. Sometimes you don’t just climb a mountain because it is there. John D’Ambrosia also leads the IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment Ad Hoc, which has been meeting regularly to discuss these topics, as well as listen to invited speakers from some of the largest bandwidth providers/consumers today. <br> <br> The need to better understand all aspects of bandwidth is universal and an important prerequisite to the move to even higher data speeds. To help foster discussion, the Ethernet Alliance will be centering its next Technology Exploration Forum (TEF) on this very topic, <a href="" target="_blank">“The End User Speaks!”</a><br> <br> So, hopefully, this clears things up. And before being hasty and jumping to the next speed plateau of Ethernet, it appears there is plenty of work still to be done with 100GbE.
<br> <br> <b>Jim Theodoras</b> is senior director of technical marketing at <a href="" target="_blank">ADVA Optical Networking</a> and a past president of the <a href="" target="_blank">Ethernet Alliance</a>.</p> <p><i>Published January 11, 2012</i></p> <hr> <p><b>What’s happening with bend-insensitive multimode fiber?</b><br> David Mazzarese</p> <p>Multimode fiber is replacing copper as the connectivity option of choice in high-end data centers. With 10 Gbps as the preferred method of transmission, OM3 and OM4 multimode fibers have proven to be the most cost-effective way to transmit high data rates over the 10 to 550 meters typically seen in data centers. Meanwhile, data center links have begun to migrate from 10 Gbps to 100 Gbps as 10-Gbps server ports have become available.<br /> <br /> It has become apparent that meeting these challenges will require parallel data transmission using multiple fibers for the 100-Gbps data stream. The recent IEEE 802.3ba Ethernet standard calls for 10 multimode fibers transmitting 10 Gbps each to reach 100 Gbps. The standard interface is a 24-fiber MPO connector or two 12-fiber MPO connectors, so a 100-Gbps link typically requires 24-fiber cable.<br /> <br /> TIA 42.12, IEC SC86A and ISO/IEC SC25 have recently developed OM4 multimode fiber standards to support these new applications. Market acceptance of OM4 fiber has been explosive since the adoption of IEEE 802.3ba, which included OM4 fiber as one of the approved media types. <br /> <br /> What will follow OM4? One candidate is bend-insensitive multimode fiber (BI-MMF). These products were first introduced as a way to improve cable management in large data centers. Discussions about BI-MMF in standards groups began in April 2010. Since then, it has become apparent that the specifications for standard OM3 and OM4 fiber are not sufficient for these new bend-insensitive multimode fiber designs.
Studies show that several &ldquo;leaky&rdquo; mode groups propagate up to several hundred meters in BI-MMF and need to be accounted for in fiber standards. <br /> <br /> A TIA TR 42.12 task force led by OFS has been charged with understanding these new fiber types. Its work includes clarifying the numerical aperture, core diameter, and bandwidth of BI-MMFs. Work to date has shown that fiber profiles vary significantly from manufacturer to manufacturer, unlike standard multimode fiber. These profiles are all very different from the embedded base of standard multimode fiber. The interaction of all these different designs is not well understood at this time and is being evaluated by the task force. <br /> <br /> All this uncertainty means that many bend-insensitive multimode fibers being installed today may have bandwidth and connection losses that vary significantly from the standard multimode fibers in the embedded base. Until the fundamental transmission properties can be agreed upon and the interaction of these fibers with the embedded base is well understood, we at OFS believe it would be wise to hold off on deploying this new class of optical fibers.</p> <p><strong>David Mazzarese</strong> is fiber systems engineering manager at <a href="" target="_blank">OFS</a>.</p> <p><i>Published May 5, 2011</i></p> <hr> <p><b>OFC/NFOEC Reporter's Notebook, Day 2</b><br> Stephen Hardy</p> <p>Yes, we all decided to show up at the Convention Center in LA on Wednesday despite what's going on in the stock market.
(For more on that, <a href="/business/news/How-long-will-the-optical-communications-correction-last-117718058.html">see the story I just posted</a>.)</p> <p>Things besides the stock market that were seen or discussed on the show floor:</p> <ul> <li>Ixia has stockpiled a nice supply of cash and is going shopping, a source at the company told me. Any ideas?</li> <li>Speaking of test companies, Optametra has added 3D constellation analysis to its optical modulation analyzer. Yes, I'm talking put on the glasses and watch that Poincaré sphere bulge in your direction kind of 3D. And, yes, they report they're getting phone calls from field equipment manufacturers wanting to talk about how carriers are going to test 100G deployments.</li> <li>DWDM supplier Optelian is using modules from someone other than JDSU for its newly announced use of tunable XFPs. I got the impression that they're not coming from Emcore, either, despite the fact that the two companies have worked together in the past. Their other options would include at least Finisar, Fujitsu, Sumitomo, and Oclaro, based on what's on display in the exhibit area.</li> <li>Avago is talking up other uses for the optical modules it has developed for LightPeak/Thunderbolt. Active optical cables for USB 2.0 and 3.0 applications, as well as HDMI cables, are some potential products in consumer applications for the device. Despite the fact that Apple is using an electrical version of the technology, Avago remains confident that we'll see the optical version of Thunderbolt employed in the near future.</li> <li>Nokia Siemens Networks plans to add colorless and directionless ROADM capabilities to the hiT 7300 and 7500 via a new release slated for this month. Meanwhile, 100G capabilities will be ready for customer trials in the June timeframe. And, yes, they're still working with Cisco/CoreOptics.</li> <li>Infinera has its eyes on conventional ROADM capabilities, as well as MPLS-TP features for its DTN platforms.
The ROADM function would be useful for ingress/egress applications.</li> <li>Remember MEMS-based optical switches? Calient hopes you'll be less likely to picture big, bulky platforms with a rat's nest of cabling once it debuts its MEMS-based subsystems. The company is targeting colorless/directionless ROADM and data center applications. But it still expects to find use for its 1080x1080 matrix capabilities.</li> <li>[UPDATED] Finisar has a demonstration of a 100GBase-LR4 CFP in its booth. The module leverages DFB lasers and should be sampling in the latter part of this year and in full production in the first half of next year. So reasonably priced 4x25G 100G modules will be available sooner than &quot;some people&quot; -- that means you, Google -- think, the company's Rafik Ward told me.</li> <li>NeoPhotonics also is looking at CFPs, as well as QSFPs. The company expects to have a 4x10G CFP by the middle of this year, with a 4x25G version on the roadmap.</li> <li>Oclaro is very close to having its coherent 40G module ready. The holdup is the electronics from ClariPhy, which should be available &quot;very soon,&quot; according to Terry Unter, who oversaw work on the module (as well as everything else) at Mintera before Oclaro purchased the company. Despite the delay, the relationship between Oclaro and ClariPhy is still good, Unter emphasized.</li> </ul> <p>Catch up on the first day of exhibits with the <a href="/blog/OFCNFOEC-Reporters-Notebook-Day-1.html">Reporter's Notebook, Day 1</a>.</p> <p><i>Published March 10, 2011</i></p> <hr> <p><b>Networking towards the cloud</b><br> Roy Rubenstein</p> <p>The quest to improve the use of IT resources is proving an arduous and complex journey. <br /> <br /> For end users, the advent of dynamic data centers will be felt as an improved experience when using applications.
For data center operators, the impact will be far greater; computing costs will come down and workload automation will be possible, a prerequisite for the widespread adoption of cloud computing. <br /> <br /> The first step along the journey occurred when enterprises started concentrating their scattered, underutilized servers within data centers. Servers were replaced with standardized hardware to cut the number of server types IT staff needed to maintain, further reducing operational costs. But the biggest change -- causing an upheaval in the data center -- is the adoption of virtualization techniques. <br /> <br /> Virtualization enables servers and storage to be split into multiple logical versions, raising hardware utilization from a typical 10% to as high as 70% to 80%. Such efficiency improvements scale massively, given that data centers can host tens of thousands of servers. Market research firm Gartner predicts that over half of all workloads will be &ldquo;virtualized&rdquo; by 2012. <br /> <br /> Virtualization is also transforming data center networking, spurring standards developments to meet new requirements. These include the IEEE&rsquo;s Data Center Bridging, the IETF&rsquo;s Transparent Interconnection of Lots of Links (TRILL), and the IEEE&rsquo;s Edge Virtual Bridging and Bridge Port Extension.<br /> <br /> With virtualization, applications are no longer confined to single machines but are shared across multiple servers for scaling. This is changing the traffic flow within data centers. Until now the predominant traffic has been &ldquo;north-south,&rdquo; up and down the three-layer hierarchy of switches commonly used: access (top-of-rack), distribution, and core. With virtualization, the predominant traffic is now &ldquo;east-west,&rdquo; across the same tiered equipment. 
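A toy calculation makes the cost of tree-based Layer 2 switching in such a fabric concrete. This sketch is my own illustration (the fabric sizes are hypothetical, not figures from the article): a spanning tree may keep only N-1 of a mesh's E physical links active, blocking the rest to prevent loops, which is exactly the bandwidth that multipath schemes such as TRILL recover.

```python
# Illustration (my own, not from the article): links a spanning tree
# blocks in a fully meshed leaf-and-spine fabric.

def spanning_tree_blocked_links(leaves, spines):
    links = leaves * spines          # every leaf wired to every spine
    nodes = leaves + spines
    active = nodes - 1               # links a spanning tree keeps active
    return links - active            # blocked links, i.e. idle bandwidth

# In a modest 16-leaf, 4-spine fabric, 45 of the 64 links sit idle
# under a spanning tree; a multipath fabric can use all 64.
print(spanning_tree_blocked_links(16, 4))
```

The larger and denser the fabric, the worse the ratio, which is one way to see why east-west traffic growth pushes operators toward the loop-avoidance alternatives discussed below.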
<br /> <br /> The need to scale the switching while simplifying the management is also leading to new single logical layer architectures that will scale to tens of thousands of 10-Gigabit Ethernet ports. First examples include Juniper Networks&rsquo; Stratus and Brocade&rsquo;s virtual cluster switching.<br /> <br /> <strong>Networking standards</strong><br /> Data Center Bridging (DCB) supports the lossless requirements of storage traffic and the low latency associated with InfiniBand. DCB promises a single consolidated network within the data center, and is being introduced as data center staffs adopt 10-Gigabit Ethernet ports.<br /> <br /> TRILL is an important complement to DCB that enables far larger Layer 2 networks. TRILL-based networks linking switches across the data center will avoid the formation of loops without needing to block links and sacrifice precious bandwidth -- a shortfall of the Spanning Tree protocol. <br /> <br /> The network must also cope with the switching of virtual machines between servers, across the data center, and between data centers, while also carrying information associated with each virtual machine for its correct configuration on the destination server. <br /> <br /> A server&rsquo;s software-based hypervisor that oversees the virtual machines comes with a virtual switch. But the industry consensus is that switching is best executed in hardware rather than in software on a server. Two standards are in development to handle these virtualization requirements: 802.1Qbg Edge Virtual Bridging and 802.1Qbh Bridge Port Extension.<br /> <br /> The 802.1Qbg camp is backed by leading switch and network interface card vendors, while 802.1Qbh is based on Cisco Systems&rsquo; VN-Tag technology. Both standards will likely be embraced within the data center.<br /> <br /> <strong>Challenges</strong><br /> All these networking standards are nearing completion. 
Yet while the protocols will soon be deployed, the expectation is that it will be at least five years, and more likely 10, before their full impact is felt.<br /> <br /> Force10 Networks believes it will be a long and challenging transition. IBM points out how enterprises are used to working in IT silos, selecting subsystems independently, and that new work practices across divisions will be needed if the networking challenges are to be addressed. Market researcher Yankee Group points out that a lot of the future value of these various developments will be based on enabling automation, a big IT hurdle in itself. <br /> <br /> &ldquo;We all realize it is complex,&rdquo; says one executive at a large service provider. &ldquo;Managing pooled resources is a learning curve for everyone.&rdquo;<br /> <br /> <em>Roy Rubenstein is the editor of </em><a href="" target="_blank"><em>Gazettabyte</em></a></p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/12/networking-towards-the-cloud.html2010-12-06T19:28:00.000Z2013-02-26T22:34:24.973ZMultiPhy presents its casenoemail@noemail.orgStephen Hardy<p>There has been a lot of discussion and speculation surrounding the several startups targeting the 40- and 100-Gbps market. One of them, Israel's MultiPhy, has finally given us something concrete to talk about by populating its website at <a target="_blank" href="http://www/"></a>.</p> <p>According to the website, MultiPhy will concentrate on CMOS-based ICs for both coherent and direct detect applications. 
Tools in the MultiPhy kit include maximum likelihood sequence estimation expertise (the same technology CoreOptics leveraged to <a href="/business/news/Cisco-to-acquire-CoreOptics-94447114.html">attract Cisco's attention</a>) and what it termed &quot;one sample per symbol architectures.&quot;</p> <p>The upcoming devices include:</p> <ul> <li>the MP1040D and MP1100Q, soft-decision mixed-signal ADC-DSP-based devices designed to replace &ldquo;hard-decision&rdquo; de-mux/CDR in direct detection applications</li> <li>the MP2040C and MP2100C, mixed signal coherent transceiver chips that add coherent-detection-specific algorithms to some of the building blocks of the MP1040D and MP1100Q.</li> </ul> <p>There's a lot more detail on the site about the first two chips than the last two, which implies we'll probably see the first two on the market sooner. Roy Rubenstein of Gazettabyte, who visited the company recently and <a href="" target="_blank">wrote about the experience on his blog</a>, got the same impression from his conversations with MultiPhy CEO Avi Shabtai and Director of Product Management Ronen Weinberg.</p> <p>&nbsp;</p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/11/multiphy-presents-its-case.html2010-11-29T16:22:00.000Z2013-02-26T22:34:56.817Z40 and 100 Gigabit Ethernet presents testing challengesnoemail@noemail.orgJeff Lapak and David Estes<p>The IEEE 802.3ba 40 and 100 Gigabit Ethernet standard presents some unique <a href="/test-and-measurement">testing</a> challenges for both product developers and users. First among these is the additional test equipment technicians require to perform some of the formerly routine evaluations of Ethernet equipment.<br /> <br /> The standard defines several operating speeds that rely on multiple lanes running at serial speeds of 10 to 25 Gbps. 
The upside of this methodology is that companies may be able to reuse test equipment purchased for measurement of physical layer properties, especially at 10 Gbps. However, new adaptors and specialized wavelength splitting tools may need to be purchased for laboratories whose focus previously centered on earlier flavors of Ethernet. This is due to the fact that several of the PHY types defined in 802.3ba rely on WDM on a single optical fiber, while many of the others rely on ribbon-style optical cables with multiple fiber-optic channels per cable. Characterization of the adaptors will be critical to obtaining accurate measurements of the underlying physical layer. <br /> <br /> In addition to these physical layer measurements, frame generation and bit-error ratio testing (BERT) also suffer from increased complexity in 40 and 100 Gigabit Ethernet. While test equipment for generating frames is already available from major manufacturers, the price tag is relatively high and may be out of reach for some testing labs. Many companies may end up relying on their own devices for frame generation, which may limit the coverage and testable bandwidth of their devices.<br /> <br /> Verifying bit error ratios at these higher speeds can also be problematic. While many switches and routers will be available that can fully use a link at these speeds, initially many end stations will likely not be able to support full line rates. In these scenarios, verification of the defined bit error ratios of 10<sup>-12</sup> may not be practical in a reasonable timeframe. For example, testing 100GBase-SR10 requires sending over 2.3 billion frames. At line rate, this would only take five minutes -- but at slower speeds it could take significantly longer. 
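The test-time arithmetic can be sketched as follows. The inputs are my assumptions for illustration, not figures from the article: 1518-byte frames plus 20 bytes of preamble and inter-frame gap, roughly 3/BER error-free bits per lane for ~95% confidence, and per-lane verification across 100GBase-SR10's ten lanes.

```python
# Sketch of the BER verification arithmetic. Assumptions (mine, flagged
# in the lead-in): 1518-byte frames, 20 bytes preamble + inter-frame
# gap, ~3/BER error-free bits per lane for ~95% confidence, 10 lanes.

def ber_test_estimate(line_rate_bps=100e9, ber=1e-12, lanes=10,
                      frame_bytes=1518, overhead_bytes=20):
    bits_needed = lanes * 3.0 / ber                  # error-free bits to observe
    bits_per_frame = (frame_bytes + overhead_bytes) * 8
    frames = bits_needed / bits_per_frame            # frames to send
    seconds = bits_needed / line_rate_bps            # duration at the given rate
    return frames, seconds

frames, seconds = ber_test_estimate()
# Under these assumptions: roughly 2.4 billion frames in about five
# minutes at full line rate; an end station running at a tenth of line
# rate would need closer to an hour for the same confidence.
```

Under these assumed numbers the result matches the article's figures of over 2.3 billion frames and about five minutes at line rate, and it shows how quickly the test time balloons for slower end stations.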
<br /> <br /> Interoperability among companies is extremely important in the early stages of development and product release to ensure that link issues are not experienced by early adopters of the technology.<br /> <br /> One final note is that available test equipment does not provide 100 percent coverage for all parts of the 802.3ba standard. One example is the verification of the Physical Coding Sublayer. To have a high level of confidence that they are meeting the requirements of the standard, companies will be required to either perform detailed hardware simulation or test at third-party laboratories that specialize in testing lower layer technologies.<br /> <br /> These are some of the challenges the University of New Hampshire InterOperability Laboratory (UNH-IOL) will tackle within its 40 and 100 Gigabit Ethernet Consortium. Through a collaborative testing model that distributes the cost, the UNH-IOL will use its Ethernet testing capabilities to help consortium members prepare products for the IEEE 802.3ba standard. The consortium is currently accepting founding member companies who will have an early opportunity to provide input into the testing process that will enable market-ready products as the high speed Ethernet standards evolve. The fee for participation in the 40 and 100 Gigabit Ethernet Consortium is $24,000. 
More information on the 40 and 100 Gigabit Ethernet Consortium <a href="" target="_blank">is available on the UNH-IOL website.</a></p> <p><img height="75" align="baseline" width="50" class="enlargeable" title="Click to Enlarge" original="/content/dam/etc/medialib/new-lib/lw/blogs/2010/11/lw_jeff_lapak_cropped.jpg" src="/content/dam/etc/medialib/new-lib/lw/blogs/2010/11/lw_jeff_lapak_cropped.jpg/_jcr_content/renditions/pennwell.web.50.75.jpg" alt="Jeff Lapak of UNH-IOL" /> <strong>Jeff Lapak</strong> is a senior engineer at the University of New Hampshire InterOperability Laboratory.</p> <p><strong> <img height="79" align="baseline" width="50" class="enlargeable" title="Click to Enlarge" original="/content/dam/etc/medialib/new-lib/lw/blogs/2010/11/lw_estes_cropped.JPG" src="/content/dam/etc/medialib/new-lib/lw/blogs/2010/11/lw_estes_cropped.JPG/_jcr_content/renditions/pennwell.web.50.79.JPG" alt="David Estes, UNH-IOL" /> David Estes</strong> is an Ethernet research and development engineer at the University of New Hampshire InterOperability Laboratory.</p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/11/40-and-100-gigabit-ethernet-presents-testing-challenges.html2010-11-22T22:20:00.000Z2013-02-26T22:34:41.257ZWhen does coherent vs direct detection make sense for 40G/100G network deployments?noemail@noemail.orgRoss Saunders<p>Without a doubt, the networking performance advantages of coherent technology are considerable. With coherent detection, the phase information of the optical signal is preserved after electro-optic detection, allowing optical distortion effects such as chromatic dispersion and polarization mode dispersion (PMD) to be compensated electronically. 
This is even more elegant in digital coherent detection schemes (the preferred embodiment in the optical industry), as the adaptive equalizer consists of a digital signal processor (DSP), typically implemented in CMOS ASIC technology, which is low cost to produce in volume.<br /> <br /> However, as is typical in this industry, the &ldquo;one solution fits all&rdquo; idea that coherent technology wins in all applications is incorrect -- and is likely to remain so for the near future.<br /> <br /> Fundamentally, coherent optical systems require much more complex electro-optics than direct-detection schemes. Typical 40G/100G coherent systems use the polarization-multiplexed <a href="/equipment-design/featured-articles/is-dpndashqpsk-the-endgame-for-100-gbitssec-54890687.html">quadrature phase-shift keying</a> (PM-QPSK) modulation scheme. This approach requires:</p> <ul> <li>two lasers</li> <li>dual polarization nested Mach-Zehnder modulators (basically, four modulators)</li> <li>four driver amplifiers</li> <li>four balanced photodiodes</li> <li>some optical passives for polarization beam combining/splitting and phase diversity.</li> </ul> <p><br /> Compare this to <a href="/equipment-design/transport/featured-articles/Engineering-DPSK-spectral-properties-enables-superior-performance-through-multiple-cascaded-optical-wavelength-selective-switches-68645527.html">differential phase-shift keying</a> (DPSK), which requires a single laser/driver amp/modulator/photodiode and delay interferometer plus a tunable dispersion compensator (TDC).<br /> <br /> The increased complexity of coherent schemes simply translates into increased cost. One smart thing the industry is doing at 100G is to standardize the modulation scheme and integrated photonics in the OIF. 
This certainly helps the cost structure but, at least in the early years, not enough to offset the more complex transmit/receive design for coherent.<br /> <br /> In some network applications, using coherent detection will still make sense, even though the transponder cost is higher. For example, the PMD tolerance of direct-detection schemes using 40G DPSK is around 3 ps mean, or around 8 ps if used with a PMD compensator or for an RZ-DQPSK modulation format. PMD requirements beyond these levels can easily be satisfied using coherent detection. In addition, assuming a clean design with low implementation penalty, coherent detection should offer a 2- to 3-dB OSNR improvement, enabling greater distance for trans-oceanic submarine or terrestrial ultra long haul (ULH) applications.<br /> <br /> Another application where coherent technology has an advantage is low-latency connectivity. The use of coherent detection can completely eliminate the need for optical fiber-based dispersion compensation, which reduces the distance, and hence latency, of the optical link. This has some advantages in the financial community and for gaming applications. <br /> <br /> The bottom line, though, is that while there are network applications where coherent&rsquo;s transponder cost premium can be justified by the reduction in OEO regenerators required at the network level, there are others where it can&rsquo;t. The marketing and performance advantages of coherent detection make for an easy sell -- but economic reality means that there will be many metro, regional, long haul, and even submarine links where 40G DPSK or DQPSK will offer the best price/performance tradeoff.<br /> <br /> Direct detection has dominated 40G deployments to date, with strong demand forecasts in 2011, 2012, and 2013. Coherent 40G technology will begin deployments in especially challenging applications such as very high PMD older fibers or trans-oceanic submarine. 
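As a rough aside on what a 2- to 3-dB OSNR advantage buys (my own back-of-the-envelope model, not a figure from this post): in a link limited by accumulated amplifier noise, received OSNR falls roughly in proportion to the number of amplified spans, so extra OSNR margin converts almost directly into extra reach.

```python
# Back-of-the-envelope sketch (my own simplification): if OSNR degrades
# ~linearly with the number of amplified spans, a receiver tolerating
# X dB less OSNR supports 10**(X/10) times as many spans, all else equal.

def reach_multiplier(osnr_margin_db):
    return 10 ** (osnr_margin_db / 10.0)

# 2 dB of margin buys roughly 1.6x the spans; 3 dB roughly doubles them.
```

This ignores nonlinear penalties and margin allocations in real link budgets, but it conveys why a few dB of OSNR matter so much for submarine and ULH reach.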
But wide-scale adoption of coherent technology is not likely until 100G matures, where the performance advantages of coherent really become a &ldquo;must-have&rdquo; in the majority of applications. <br /> <br /> For 100G coherent, the use of OIF integrated photonics is also expected to provide a competitive $/bit/s cost structure at a fairly early stage in the technology life cycle. Even after 100G availability, 40G direct detection is likely to survive as a dominant technology in metro/regional networks and smaller national networks where 100G pipes are still too big to fill efficiently.<br /> <em><br /> -- Ross Saunders is general manager, next gen technology, at <a target="_blank" href="">Opnext</a></em><br /> &nbsp;</p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/09/when-does-coherent-vs-direct-detection-make-sense-for-40g100g-network-deployments.html2010-09-19T15:16:00.000Z2013-02-26T22:35:17.178ZThe price of propping up an industrynoemail@noemail.orgRoy Rubenstein<p>Skype&rsquo;s planned initial public offering doesn&rsquo;t directly call to mind the travails of the optical component industry. But the hope of raising $100 million by an over-the-top telephony company does make one wonder about the value of optical component vendors, the enablers of the underlying infrastructure.&nbsp;</p> <p>Ovum analyst Ron Kline recently noted how hard life is for optical component players, especially wavelength-selective switch (WSS) players that make the core building blocks used for <a href="/about-us/lightwave-issue-archives/issue/ensuring-profitability-with-a-3g-roadm-system-53428682.html">ROADMs</a>. The likes of Finisar, JDS Uniphase, and Oclaro are in a tough spot, said Kline, in that they have to invest heavily in R&amp;D for carriers and system vendors that &ldquo;ask for the world.&rdquo;&nbsp;</p> <p>He was referring to the vendors having to add variable channel widths to their WSSs to accommodate future line speeds above 100 Gbps. 
Such speeds will require channels wider than the 50-GHz channels used now. Having variable-width channels will allow operators to maximize capacity by efficiently packing lightpaths whatever their width and to avoid wasting precious fiber spectrum.&nbsp;</p> <p>WSS makers are thus developing fine pass-band filters that, when combined in integer increments, form adaptive channel widths. According to Finisar, such a &ldquo;gridless&rdquo; scheme has gained much operator attention over the last six months. Yet gridless will only be needed in several years&rsquo; time. After all, so far only a handful of 100-Gbps wavelengths are deployed.&nbsp;</p> <p>The operators&rsquo; and system vendors&rsquo; wish list does not stop there. WSS vendors such as JDS Uniphase <a href="/blog/OFCNFOEC-2010-Reporters-Notebook-Day-3.html">are developing 1x23 WSSs</a> even though eight-port ROADMs (served using 1x9 WSSs) are more than adequate for now.</p> <p>Contentionless -- non-blocking -- ROADMs that drop multiple versions of the same wavelength carried by different fibers are also under development.</p> <p>And after gridless and contentionless, operators want faster switching speeds to reduce overall network latency, says Oclaro. Requests for proposals are enquiring about switching speeds under 100 ms, whereas until recently 2 s was the norm.</p> <h3><strong>So what&rsquo;s new?</strong></h3> <p>But is the fact that optical component vendors must invest heavily in R&amp;D an issue? And hasn&rsquo;t this always been the case?</p> <p>One could even argue the need for such investment is beneficial. At least firms have scope to differentiate their WSS products, <a href="">a task far harder for optical transceiver makers</a>. And as component vendors add more core networking features, they gain system know-how. 
By integrating such system elements, component vendors gain value and revenue as they become subsystem vendors.</p> <p>But the concerns are whether optical component vendors have deep enough pockets -- and whether they are adequately rewarded -- for their R&amp;D endeavors. This is an issue only if the financial health of optical component players is such that the R&amp;D needed to keep driving down the cost of transporting traffic is put at risk. So far there are no signs of such a development.</p> <p>Optical component firms aren&rsquo;t helped by being so remote -- almost the outer solar system -- from the sunny service layer that basks in attractive gross margins.</p> <p>It is unrealistic to expect that a successful Skype flotation -- and all in the industry should hope it is -- will directly benefit the photonic-layer enablers. But it is worth considering how much economic value rides on the work of a select few.</p> <p><em>Roy Rubenstein is the editor of the blog <a href="">Gazettabyte</a></em></p> <p>&nbsp;</p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/08/the-price-of-propping-up-an-industry.html2010-08-26T13:33:00.000Z2013-02-26T22:35:33.136ZASICs and digital signal processing heat up the optical marketplacenoemail@noemail.orgRoy Rubenstein<p>The flurry of recent 100-gigabit announcements highlights how companies are lining up in the race to the next optical transmission speed hike. A handful of 100-Gbps wavelengths may have been deployed, but the starting pistol is the IEEE&rsquo;s 40 and 100 Gigabit Ethernet standards, whose ratification is imminent.<br /> <br /> The advent of 100-Gbps coherent transmission also signals a more fundamental change in the industry. Electronics now plays a central role in optical networking and this has important consequences for firms at the various manufacturing layers of the industry. 
Acquiring such expertise is already showing signs of market overheating -- albeit, compared to a decade ago, &ldquo;localized hotspot&rdquo; is more apt.<br /> <br /> <a href="/business/news/Cisco-to-acquire-CoreOptics-94447114.html">Cisco&rsquo;s acquisition of CoreOptics</a> is the most noteworthy of recent announcements. Here is a return of in-house component expertise to a system vendor. <br /> <br /> Does the move reflect a more general trend of system vendors embracing in-house components after all the divestments of a decade ago? It seems unlikely. ACG Research argues the acquisition is a move by a router vendor to address packet optical transport. Cisco may also have decided it needs the technology in-house to move quickly to win business with important operators that want coherent technology.<br /> <br /> Cisco&rsquo;s acquisition is also noteworthy for other reasons. Cisco has a proven track record of successful acquisitions, and with CoreOptics it removes a leading coherent player from the open market -- a player that had been working with other optical vendors.<br /> <br /> In other announcements, Infinera has ditched its 40-Gbps photonic integrated circuit (PIC), <a href="/networking/news/Infinera-lays-out-40G100G-coherent-roadmap-94553759.html">turning to 100-Gbps PICs and coherent technology instead</a> as it seeks to deliver a telling return for its technology. And Alcatel-Lucent has become the second vendor after Ciena/Nortel <a href="/networking/news/Alcatel-Lucent-unveils-100G-on-1830-Photonic-Service-Switch-95980844.html">to offer a commercially available 100-Gbps system</a>.<br /> <br /> Module vendors Oclaro and Opnext have also been busy. 
<a href="/business/news/Oclaro-allies-with-ClariPhy-for-100G-coherent-94918514.html">Oclaro has partnered with IC specialist ClariPhy Communications</a> to develop and promote coherent technology while Opnext&rsquo;s in-house 100-Gbps technology is part of operator trials.<br /> <br /> These announcements reflect how squeezing more capacity out of long-reach fiber is becoming harder. More advanced modulation schemes are required, as are clever algorithms to cope with channel distortions. The advent of 100 Gbps also means vendors now prize coherent technology expertise, and in particular the ability to develop a coherent receiver ASIC. Such an ASIC comprises very high speed analog-to-digital (A/D) converters and a digital signal processor. It can even include advanced forward error correction, all on the one chip.<br /> <br /> The good news is that electronics is up to the challenge. Moore&rsquo;s Law, with its several decades of exponential growth, is coming to optics&rsquo; aid. A 45-nm CMOS process ASIC is fast enough to process the data generated by A/D converters operating at a mind-boggling 64 gigasamples/s.<br /> <br /> And silicon will only become more important as systems move beyond 100 Gbps and employ yet more sophisticated modulation schemes. Systems will also become smarter, even changing the transmission schemes and rates used between end points depending on channel conditions.<br /> <br /> It is just this coherent ASIC that is causing market overheating. <br /> <br /> The development cost for the 100-Gbps ASIC is between $15 million and $20 million. System developers known to be developing their own ASICs include Ciena, Alcatel-Lucent, Infinera, Huawei, and now Cisco. Others developing silicon include Opnext, ClariPhy, Mitsubishi (which is supplying Japanese vendors that in turn are supplying NTT), MultiPhy, and at least two other merchant chip companies. 
Then there are questions regarding the plans of startup Acacia Networks, Fujitsu Microelectronics, and Semtech&rsquo;s Sierra Monolithics.<br /> <br /> One leading FPGA company has said there are at least six companies in Asia Pacific developing 100-Gbps coherent silicon. Add to that the question of what JDS Uniphase, Mintera, and Finisar plan to do, and that gives between 12 and 24 ASIC developers.<br /> <br /> Even taking the most circumspect case -- a dozen players each spending $15 million -- $180 million is being spent on developing coherent ASICs. It will take a long time and many deployed 100-Gbps wavelengths -- especially given the pricing pressures associated with optical networking -- before the industry gets its return on investment.<br /> <br /> Equally, though, can leading optical system vendors and transponder makers afford to let others own such core optical transmission technology?<br /> <br /> <em>Roy Rubenstein is the editor of the blog </em><a href="" target="_blank"><em>Gazettabyte</em></a><br /> &nbsp;</p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/06/asics-and-digital-signal-processing-heat-up-the-optical-marketplace.html2010-06-15T15:23:00.000Z2013-02-26T22:35:50.117ZGuest blog: As 4G catches fire, fiber followsnoemail@noemail.orgTom Huegerich<p>While the deployment of 4G/LTE has been relatively limited in the United States, 4G LTE is taking off across other regions in the world. And equipment providers and carriers are scrambling to provide resources that will reshape network infrastructure for the sea change taking place.<br /> <br /> European telecom giant TeliaSonera launched the world&rsquo;s first 4G network in December 2009 in Stockholm, Sweden and Oslo, Norway. Though the network was small in size, it demonstrated that 80-Mbps speeds could be attained, generating worldwide excitement over 4G. <br /> <br /> Soon though, the TeliaSonera network could be a footnote. 
China Mobile, which boasts a subscriber base larger than the entire population of the U.S., announced plans to have a 4G network up and running by May 2010. At the end of last year, the carrier had a subscriber base of over 500 million mobile customers. With the amount of fiber and other materials required for this massive shift, China Mobile&rsquo;s bold plan is as much a challenge for equipment providers as it is for the company itself.<br /> <br /> As TeliaSonera and China Mobile lead the world into 4G, it will usher in an era of change. Studies show that most cell sites are supported by copper, with fiber-fed sites accounting for only 25 percent of wireless backhaul globally. As 4G takes hold, however, increased investment in fiber infrastructure will be needed to meet the pending wireless demands of these next-generation networks. <br /> <br /> <strong>Next-gen networks: Beyond radios and handsets</strong><br /> <br /> Next-generation wireless networks need more cell sites and smaller coverage areas to accommodate higher data rates. This will require a substantial investment in the wired portion of the network, often called &ldquo;fiber backhaul&rdquo; or the &ldquo;wireless access infrastructure.&rdquo; Service providers hope to stay ahead of the exploding demand by running fiber from the wired network to cell sites and antennas atop thousands of cell towers coast-to-coast. (For more on this trend, <a href="/networking/video/51932282.html?player=32673423001&amp;title=53029694001 ">see the interview</a> with ADC CTO Dr. Michael Day on the Lightwave Channel.)<br /> <br /> Eventually, increased data rates and &ldquo;always on&rdquo; service will require many more micro cell sites&mdash;small, remote transceivers that will be needed to support the bandwidth requirements of 4G. 
This trend will drive the use of fiber deep into the wireless access infrastructure.<br /> <br /> <strong>What markets could be next</strong><br /> <br /> Emerging markets, such as India and sub-Saharan Africa, are seeing an explosion in mobile phone subscribers. As subscribers grow, 4G expansion in these regions is certain to follow.<br /> <br /> In 2008, India and sub-Saharan Africa accounted for 32 percent of all new wireless subscribers, and from 2009 through 2012 customers in those regions are expected to account for 44 percent of new subscribers worldwide. When these massive, relatively untapped markets make the transition to 4G, fiber will compose the backbone of the access infrastructure for these networks.<br /> <br /> Despite U.S. wireless carriers&rsquo; current focus on delivering 3G connectivity, the time to upgrade many cell sites to fiber is now, so they can deliver on the promise of LTE and 4G. Longer term, this is the only hope for the U.S. to be competitive in this global economy. <br /> <br /> -- <em>Tom Huegerich is vice president, global fiber engineering, at <a href="" target="_blank">ADC Telecommunications</a>. He has more than 25 years of experience in fiber connectivity and the telecommunications industry.</em></p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/03/as-4g-catches-fire-fiber-follows.html2010-03-30T18:56:00.000Z2013-02-26T22:36:06.789ZGuest Blog: Why optical transceiver vendors are like discus-throwersnoemail@noemail.orgRoy Rubenstein<p>Optical transceiver vendors collectively have had a good quarter, with many reporting revenues up. The optical transceiver market is also set for a better year, growing to $2.2 billion, 5 percent up compared to 2009, <a href="/equipment-design/transmission/news/LightCounting-Optical-transceiver-sales-to-exceed-2B-in-2009-63524382.html">according to LightCounting</a>. 
<br /> <br /> Yet market conditions will remain tough. Transceiver vendors are challenged in how to differentiate their optical transceiver designs -- and revenues -- given the products conform to common form factors. <br /> <br /> To understand the importance of transceiver differentiation, it is worth reviewing the purpose of multi-source agreement (MSA) transceiver form factors. <br /> <br /> Common form factors arose so that optical equipment makers could avoid developing their own interfaces or being locked into a supplier&rsquo;s proprietary design. <br /> <br /> Judged in those terms, MSAs have been a roaring success. <br /> <br /> Equipment makers can now buy optical interfaces from several sources, all battling for the design win. MSAs have also triggered a near-decade of innovation, resulting in form factors from the 300-pin large form factor transponder MSA to the pluggable SFP+, less than a 60th its size.<br /> <br /> But MSAs, with their dictated size and electrical interfaces, are earmarked for specific sectors. As such the protocols, line rates, and distances they support are largely predefined. Little scope, then, for differentiation.<br /> <br /> Yet vendors have developed ways to stand out.<br /> <br /> One approach is to be a founding member of an MSA. This gives the inner circle of vendors a time-to-market advantage in securing customers for emerging standards. The CFP MSA for 40- and 100-Gigabit Ethernet is one such example.<br /> <br /> Some designs require specialist optical components that only a few vendors have, such as high-speed VCSELs used for the latest Fibre Channel interfaces. In turn, many vendors don&rsquo;t have the resources -- design teams and the deep pockets -- needed to develop advanced technologies, such as those for 40- and 100-Gbps transponders, whether it is integrated optical devices or integrated circuits. 
<br /> <br /> Being the first to integrate existing designs into smaller form factors is another way to differentiate. An example is JDSU, which has <a href="/equipment-design/products/JDSU-advances-tunable-strategy-with-new-component-building-blocks-60090712.html">integrated a tunable laser into the pluggable XFP</a> MSA. But as with all good ideas, others follow: At least three vendors are expected to sell tunable XFPs this year.<br /> <br /> Menara Networks is using its IC design and software know-how <a href="/about-us/lightwave-issue-archives/issue/menara-networks-debuts-aotn-in-a-transceivera-54885797.html">to encapsulate line card functionality within a pluggable</a>. Its XFP has an application-specific IC that supports the Optical Transport Network (OTN) encapsulation standard. The advantage to system vendors? They can design a universal line card without needing to support OTN, using the pluggable only when required.<br /> <br /> Transceiver vendors are also differentiating their products through marketing approaches. New-entrant Far Eastern vendors are selling transceivers directly to service providers and data center operators, bypassing equipment makers. <br /> <br /> They are also looking to differentiate on price, cutting costs where they can (including R&amp;D) and focusing on bread-and-butter designs. They are quite happy to leave the leading vendors to make the heavy investments and battle each other in the emerging 40- and 100-Gbps markets. <br /> <br /> Industry analysts take a pragmatic view: Differentiation doesn&rsquo;t matter so much for optical transceivers since even if a vendor gets a lead, others inevitably will follow. And anyway, the cost of transporting traffic is still too high, even with the fierce competition instigated by MSAs. 
In turn, optical transceivers are now a permanent industry fixture; they can&rsquo;t be conjured to disappear.<br /> <br /> For optical transceiver vendors, however, the result is a brutal market. <br /> <br /> So can optical transceiver vendors differentiate their products? Of course they can. But like discus-throwers, while standout performances are to be expected, their room to maneuver will remain limited.<br /> <br /> <em>Roy Rubenstein is the editor of the blog <a target="_blank" href="">Gazettabyte</a></em></p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2010/01/guest-blog-why-optical-transceiver-vendors-are-like-discus-throwers.html2010-01-21T14:43:00.000Z2013-02-26T22:36:37.041ZGuest blog: Is photonic integration market disruptive?noemail@noemail.orgRoy Rubenstein<p><em>Guest blogger Roy Rubenstein, editor of <a target="_blank" href="">gazettabyte</a>, offers his take on photonic integration.</em><br /> <br /> Nortel and Infinera have markedly different views regarding photonic integration. And if their company fortunes also differ, there is no doubting both firms&rsquo; optical engineering expertise.<br /> <br /> Nortel was first to market with <a href="/general/nortel-claims-40g-market-leadership-54895292.html">its 40-Gbps dual-polarization quadrature phase-shift keying (DP-QPSK) system</a>. Kim Roberts, Nortel's director of optics research and one of the engineers who developed the system, acknowledges photonic integration&rsquo;s role in reducing system cost and size but downplays its importance overall. Useful but not revolutionary, he says. <br /> <br /> Infinera&rsquo;s chief strategy officer, Dave Welch, thinks differently, arguing that the photonic integrated circuit (PIC) is optical networking&rsquo;s current disruption. 
Longer-term, its impact on the industry&rsquo;s supply chain could be as disruptive as the digital camera&rsquo;s CMOS image sensor &ndash; also a PIC &ndash; has been on the photography industry, he says.<br /> <br /> So who is right? And has photonic integration been overhyped in a hype-starved industry? <br /> <br /> Looking more carefully, the two companies may have different takes on photonic integration, yet both are on the same page regarding a broader form of integration: that of photonics and electronics. <br /> <br /> Nortel&rsquo;s approach is to push CMOS technology to the extreme to address high-speed transmission impairments at 40 and 100 Gbps. In particular, it is using digital signal processing to extend optical transmission systems. This simplifies end-to-end optics even if the resulting DP-QPSK transmit and receive optics are more complex and clearly benefit from photonic integration. <br /> <br /> Infinera uses its PICs to ease conversion between the optical and electrical domains. And by designing its system around its PIC, it can trade off performance between the two domains.<br /> <br /> Both firms also have impressive high-speed transmission roadmaps. Nortel has discussed speeds of up to 1 terabit per wavelength, while Infinera has shown 2- and even 4-terabit PICs. To date, <a href="/about-us/lightwave-issue-archives/issue/photonic-integration-diverges-down-two-paths-54889947.html">Infinera has detailed a 10x40-Gbps DP-DQPSK PIC</a>.<br /> <br /> Photonic integration is thus not so much overhyped as still in its infancy. <br /> <br /> Surveying the landscape, photonic integration&rsquo;s impact is limited, but then so has been its application. <br /> <br /> For high-speed transmission, it is now being used to simplify the design complexity of phase/phase-polarization modulation schemes: DPSK, DQPSK, DP-DQPSK, and DP-QPSK. 
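As a back-of-the-envelope illustration (not from the original post), the appeal of these schemes comes down to bits per symbol: QPSK carries 2 bits per symbol, and dual polarization doubles that to 4, quartering the symbol rate the electronics must handle. A minimal Python sketch of this arithmetic follows; the function names are my own, and it deliberately ignores FEC and framing overhead, which push real-world baud rates higher.

```python
def bits_per_symbol(bits_per_pol: int, polarizations: int) -> int:
    # Total bits per symbol = modulation bits per polarization x polarization states
    return bits_per_pol * polarizations

def symbol_rate_gbaud(line_rate_gbps: float, bits_per_pol: int, polarizations: int) -> float:
    # Symbol (baud) rate needed for a given line rate, ignoring FEC/framing overhead
    return line_rate_gbps / bits_per_symbol(bits_per_pol, polarizations)

# DPSK: 1 bit/symbol; (D)QPSK: 2 bits/symbol; the "DP-" prefix adds a second polarization
for scheme, bits, pols in [("DPSK", 1, 1), ("DQPSK", 2, 1), ("DP-QPSK", 2, 2)]:
    print(f"{scheme}: 100 Gbps needs {symbol_rate_gbaud(100, bits, pols):.0f} GBd")
# DP-QPSK brings a 100-Gbps line rate down to a 25-GBd symbol rate (before overhead)
```

That four-fold reduction in symbol rate is what makes 40- and 100-Gbps transmission tractable for CMOS-based DSP, at the cost of the more complex transmit and receive optics that photonic integration helps tame.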
<br /> <br /> For PON, hybrid and monolithic integration are being pursued by <a href="/fttx/products/enablence-debuts-onts-based-on-plc-platform-54891362.html">Enablence Technologies</a> and <a href="/fttx/products/OneChip-debuts-EPON-transceiver-line-for-FTTH-57655937.html">OneChip Photonics</a>, respectively, solely to reduce cost. Yet market-leading PON transceiver makers still favor using TO-can discretes and manual labor.<br /> <br /> Wavelength-selective switches have moved away from optical waveguide technology, using free-space optics instead. Here the integration story is at the packaging level. And while JDSU&rsquo;s XFP-based tunable laser shows how monolithic integration can achieve a milestone form-factor shrink, the compactness comes at the expense of optical reach.<br /> <br /> Accordingly, as long as the industry is led by conservative service providers, confining system vendors and component players to the constraints of existing DWDM networks, photonic integration&rsquo;s full potential will be curtailed. <br /> <br /> But that could change. At the OIDA photonic integration event held in September, Google presented a talk entitled <em>Life beyond 100 Gbps: Why Photonic Integration is a Must.</em> <br /> <br /> Today&rsquo;s PICs could yet be seen as the equivalent of the first digital cameras. If so, we are still years away from PICs being deployed widely in new architecture-changing platforms, the telecom equivalent of cameras on handsets and even TVs.<br /> <br /> <strong>Roy Rubenstein</strong><br /> For his gazettabyte article on photonic integration, <a target="_blank" href="">click here</a>.</p>http://localhost:4503/content/lw/en/blogs/lightwave-guest-blog/2009/11/guest-blog-is-photonic-integration-market-disruptive.html2009-11-09T20:09:00.000Z2013-02-26T22:33:59.442Z