Users, vendors weigh Ethernet vs. Fibre Channel for SANs
With Ethernet running rampant in enterprise and carrier networks, vendors and network managers have begun to consider the protocol’s role in SAN environments. While Fibre Channel technology remains the de facto gigabit-speed standard for the largest enterprise SANs, the emergence of Carrier Ethernet networks in metropolitan areas has, for a certain class of data center, increased the viability of using Ethernet to deploy IP-based iSCSI.
According to Todd Bundy, director of business development, storage, for ADVA Optical Networking (Munich, Germany), iSCSI technology is often used most effectively as a low-cost way for enterprises to connect remotely located servers and storage devices. Bundy says that iSCSI allows such enterprises to leverage an existing LAN and WAN that are already supporting IP, to feed such devices into a core SAN based on Fibre Channel technology. That’s why vendors such as Cisco have added iSCSI interfaces alongside Fibre Channel ports on their director-class switches. “The reason they’re doing that is to connect to those remote locations, where there might be just a smattering of servers and storage, using existing infrastructure,” Bundy says.
Stephanie Balaouras, a senior analyst at Forrester Research (Cambridge, MA), sees the most rapid adoption of iSCSI technology for SAN applications occurring among small-to-medium-sized enterprises. “Typically for them, if they haven’t invested in Fibre Channel to date, there’s enough IP-based alternatives out there for them that are a whole lot less expensive and don’t require additional skills to deploy,” remarks Balaouras.
Michael Karp, a senior analyst at Enterprise Management Associates (Boulder, CO), concurs with this assessment. “It has always made a lot of sense for people that don’t have a Fibre Channel background to put iSCSI in, because Fibre Channel requires a lot of very specific knowledge, whereas with iSCSI, there’s a lot of generalized knowledge that people already have,” reasons Karp. “It’s built on IP, and it’s built on SCSI; nobody has ever loved either protocol, but it’s the devil we’ve been living with for over twenty years now.”
ADVA’s Bundy says that the decision about where and when to deploy one SAN technology over the other boils down to how an organization prioritizes cost, performance, and reliability. For non-mission-critical applications such as Exchange servers or Windows servers, Bundy says that iSCSI presents a highly feasible option, particularly as cost is often the primary concern in such scenarios, ahead of reliability and performance. However, in the case of large data centers running “heavy-duty databases,” Bundy notes that system reliability is most often held as the primary concern, followed by performance and cost factors. This prioritization would favor Fibre Channel.
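Bundy’s rule of thumb can be sketched as a toy decision helper. The function below is purely illustrative (the name and the mapping are this sketch’s assumptions, not anything from ADVA): rank the three factors, and the top priority points to a transport.

```python
def suggest_san_transport(priorities):
    """Toy heuristic reflecting the trade-offs described in the article.

    priorities: the strings 'cost', 'performance', and 'reliability',
    ordered from most to least important for the application.
    """
    top = priorities[0]
    if top == "cost":
        # Low-cost option that reuses an existing IP LAN/WAN.
        return "iSCSI over Ethernet"
    if top in ("reliability", "performance"):
        # Favored for large data centers running heavy-duty databases.
        return "Fibre Channel"
    raise ValueError(f"unknown priority: {top}")

# Non-mission-critical Exchange/Windows servers: cost comes first.
print(suggest_san_transport(["cost", "reliability", "performance"]))
# Large data-center databases: reliability first, then performance.
print(suggest_san_transport(["reliability", "performance", "cost"]))
```

Real deployments weigh many more variables (existing skills, installed base, distance), so this captures only the coarse prioritization Bundy describes.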
Karp contends that IP-based storage has historically made enterprises “a little bit nervous” due to difficulties traditionally associated with making the technology secure. However, he maintains that, in recent years, vendors have quietly started to upgrade the capability of IP-based SAN applications, spurring an increasing number of Fibre Channel environments to experiment with incorporating iSCSI-based SANs. “I’m seeing the two technologies lying right alongside one another in the same computer room,” he claims.
In light of such optimistic assessments, is it feasible to envision a world where IP-based SAN applications such as iSCSI might challenge the primacy of Fibre Channel technology, especially given the emergence of 10-Gbit Carrier Ethernet technologies?
“You really can’t look at it from that perspective,” counters Bundy. “You’ve got to look at it from the perspective of the application you’re trying to run, and how well it runs over that protocol.”
He continues, “If you take just a mainframe environment, it doesn’t support IP. You can’t do mainframe clustering over IP or iSCSI, you can’t do mainframe clustering over SONET. You can only do it over WDM [via Fibre Channel], because it delivers the performance and the low latency that you need to keep these mainframes and servers perfectly timed with one another, so they’re never out of synch. The purpose, of course, is if I’ve got three or four data centers, and I want to cluster all my mainframes, I now can balance my workloads across the various data centers.”
Forrester’s Balaouras sees the market for Fibre Channel switches with 54 ports or fewer as having reached a plateau, with most growth instead occurring in large-scale, core director switches. Of the many enterprises she’s polled, Balaouras says that most cite storage consolidation enabled by such switches as ranking among their highest priorities. “If they’ve built a lot of SAN islands with smaller switches, or if they’ve built a SAN infrastructure with multiple, small switches, they’re really looking to consolidate all that with a really reliable, high-port-count director,” she says.
Bundy notes that, since the terrorist attacks of Sept. 11, 2001, many “power users” of enterprise SANs, such as financial customers in the New York metropolitan area, have implemented WDM technology to connect geographically far-flung data centers for data replication and disaster recovery. Traditionally, says Bundy, the ideal for such institutions has been to move data replication centers away from a central locale to distances of 100 to 200 km. “Now, where they’ve run into a problem,” he continues, “is: what if we have, say, a biological attack… something that literally wipes out everything within 125 km of New York?” To guard against such grim possibilities, Bundy says that many metro area customers, such as those in Manhattan, now seek to replicate data over even longer distances, to places as disparate as London, California, and Texas. “Certainly, you’re not going to pay for a piece of fiber and WDM to go long haul, at those kinds of distances,” adds Bundy. “It really isn’t practical; too expensive.”
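The distances involved translate directly into latency, which is why synchronous replication over WDM stops at metro range. As a rough back-of-the-envelope sketch (the 200-km-per-millisecond figure assumes light propagating at about two-thirds of c in glass fiber, and the function name is this sketch’s own):

```python
def fiber_rtt_ms(distance_km, km_per_ms=200.0):
    """Rough round-trip propagation delay over optical fiber.

    Assumes ~200 km per millisecond (light at about 2/3 the speed of
    light in a vacuum); ignores switching and equipment delay.
    """
    return 2 * distance_km / km_per_ms

print(fiber_rtt_ms(125))   # ~1.25 ms round trip for a 125-km metro link
print(fiber_rtt_ms(5600))  # ~56 ms for a New York-to-London-scale path
```

Every synchronous write has to wait out that round trip, so a tens-of-milliseconds transatlantic penalty on each I/O, on top of the cost of dedicated long-haul fiber, pushes such links toward asynchronous, channel-extension approaches.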
Bundy says that to achieve such long-range capabilities, large enterprises with critical data-center requirements are left to consider channel-extension technologies such as Fibre Channel over SONET and Fibre Channel over IP (FCIP). “Fibre Channel over SONET requires you to put in a separate circuit for your SAN traffic,” he explains. “Most of my customers have started with wanting Fibre Channel over SONET, because the data center guys are saying, ‘No way will I mix my storage traffic with the IP traffic.’”
However, Bundy observes that, as the latest generation of FCIP gateway platforms has earned high marks for performance, enterprises are “taking a real strong second look at them.” Bundy maintains that such FCIP systems represent a very inexpensive way for enterprises to tunnel storage traffic over an existing LAN, sharing such infrastructure for connection to remote locations.
In contrast, Enterprise Management’s Karp points out the difficulty of maintaining geographically distributed sites, such as cross-country IT centers or globally remote metro disaster-recovery operations. “It’s a challenge to be able to manage an unmanned Fibre Channel site; I mean, that’s pretty much impossible,” he contends. “But as long as you have an IP address, you can manage IP-based storage. So that’s one definite advantage for iSCSI, right there.”
Counters Bundy, “If your recovery time objective…is hours, days, or weeks, then maybe iSCSI or IP is okay for connecting your sites. If your objective is a little more stringent, then maybe Fibre Channel over SONET is a good solution. But if you want to recover instantly, so that the minute you have a failure at location ‘A,’ you recover instantly at location ‘B,’ then WDM is the winner every time.” Bundy also reiterates pure WDM’s superiority in scenarios where two such locations are required to share storage and processing power, as in the case of mainframe clustering via Fibre Channel.
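The tiers Bundy lays out can be summarized as a simple mapping from recovery time objective (RTO) to transport. The thresholds below are illustrative assumptions for the sketch; Bundy names the tiers but not exact cutoffs:

```python
def replication_option(rto_seconds):
    """Map a recovery-time objective to the transport tiers Bundy
    describes. Thresholds are illustrative, not from the article."""
    if rto_seconds <= 1:
        # Near-instant failover between sites, e.g. mainframe clustering.
        return "WDM (native Fibre Channel)"
    if rto_seconds <= 3600:
        # Stringent but not instantaneous recovery.
        return "Fibre Channel over SONET"
    # RTO measured in hours, days, or weeks.
    return "FCIP or iSCSI over IP"

print(replication_option(0))      # instant failover tier
print(replication_option(600))    # stringent, sub-hour tier
print(replication_option(86400))  # day-scale recovery tier
```

In practice the choice also depends on distance and budget, as the preceding sections note, but RTO is the axis Bundy uses to separate the three options.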
Forrester’s Balaouras concedes that large enterprises that have already invested in Fibre Channel will most likely continue to do so, especially in core data center environments. “Fibre Channel is definitely much more complex,” she allows. “And it costs a whole heck of a lot more. But in terms of reliability and speed, it’s still much better.”
“I’m a fan of Fibre Channel, too, but I’m a much bigger fan of getting what you pay for,” concludes Karp. “I think we’re at the point now where Fibre Channel still has marginally greater throughput, but the fact remains that, particularly in long-haul operations, iSCSI clearly offers value that fiber doesn’t. From an IT manager’s standpoint, being able to play the various technology vendors against one another, you can basically start to make IT decisions based on spreadsheet analysis and not on a technology analysis. And I think that’s much better for just about everybody.”