Data center considerations: Architecture choices

May 16, 2012
This article will discuss three data center architectures and the tradeoffs of each. Each design has its advantages and disadvantages, and sometimes more than one architecture may be appropriate for your installation.

When designing a data center the primary concerns are reliability, performance, scalability, the ability to support both current and future applications and speeds, and cost. Throw in the desire to make it green and energy efficient, and you have a lot of parameters that need to be addressed.

There are several architectures from which network designers can choose. Which one you use will depend on the size of your current data center, your plans to expand, whether it’s a new installation or an upgrade of a legacy system, and how quickly you expect your data center needs to grow.


Centralized or Direct Connect architecture
The Centralized or Direct Connect model is a good architecture for smaller data centers (< 5,000 square feet). As you can see in Figure 1, in a centralized architecture you have separate LAN/SAN environments, each with home-run cables that go out to the respective cabinets and zones and then all the way back to a centralized core area. All of your switches and equipment are centralized in one location.

Figure 1. The Centralized or Direct Connect architecture.


Advantages:
  • This is a very flexible design that enables you to accommodate changes in your data center or SAN environment.
  • This architecture maximizes port use by enabling you to match your switches to your current needs without having a lot of unused ports.
  • This is a secure architecture, since your electronics are in a single location.
  • This architecture offers excellent physical control, since there are fewer points to administer.


Disadvantages:
  • This architecture does not scale well, which makes it difficult to support expansions.
  • In larger data centers cable congestion can be a problem, especially coming back to the main distribution area where your core switch is located.
  • If you use copper media you may run into length limitations, especially at higher data rates. At 10 Gigabit Ethernet (GbE), for example, twinax cable supports only 7 meters.
  • This architecture has the highest cabling cost, since so many individual cables are used. However, it’s important to keep in mind that the cabling infrastructure of a network typically represents only 5% of the total network cost.
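To make the cabling tradeoff concrete, here is a small illustrative sketch. The cabinet count, connections per cabinet, and average run length are hypothetical example inputs, not figures from any particular installation; only the 7-meter twinax limit comes from the text above.

```python
# Illustrative sketch only: rough home-run cabling estimate for a
# centralized design. The cabinet count, cables per cabinet, and
# average run length below are made-up example inputs.

TWINAX_MAX_M = 7  # passive twinax reach at 10GbE, per the article

def homerun_estimate(cabinets, cables_per_cabinet, avg_run_m):
    """Total cable count and total length for home runs back to the core."""
    cables = cabinets * cables_per_cabinet
    return cables, cables * avg_run_m

cables, metres = homerun_estimate(cabinets=40, cables_per_cabinet=24, avg_run_m=30)
print(f"{cables} home-run cables, {metres} m of cable")  # 960 cables, 28800 m

# At a 30 m average run, passive twinax is ruled out for 10GbE links:
print("twinax viable:", 30 <= TWINAX_MAX_M)  # twinax viable: False
```

Even at modest scale, every server connection becomes its own run back to the core, which is why cable congestion and copper reach limits surface first in this design.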

Distributed or Top of Rack architecture
In a Distributed or Top of Rack (ToR) architecture, LAN/SAN switches are mounted at the top of each rack, and all the servers in the rack are cabled to those switches, which then have a single uplink (Figure 2). This approach simplifies cable management, enables fast port-to-port switching for servers within the rack, and provides predictable oversubscription of the uplink. The smaller switching domains (one per rack) also aid fault isolation and containment.

Figure 2. The Distributed or Top of Rack architecture.

Although cabling is used more efficiently in ToR than in Centralized models, it is not necessarily the most cost-effective approach because it often requires more electronics. Under-use of ports occurs when there are not enough servers to fill the switch. More electronics consume more power and also produce a lot more heat.
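The oversubscription and port-use points above are simple arithmetic; this sketch works an example. The rack size, link speeds, and 48-port switch are hypothetical values chosen for illustration.

```python
# Illustrative sketch: ToR uplink oversubscription and port use.
# The server counts, link speeds, and switch size are hypothetical.

def oversubscription(servers, server_gbps, uplinks, uplink_gbps):
    """Ratio of aggregate downlink bandwidth to uplink bandwidth for one rack."""
    return (servers * server_gbps) / (uplinks * uplink_gbps)

def port_utilization(servers, switch_ports):
    """Fraction of switch ports actually used; the rest burn power idle."""
    return servers / switch_ports

# A rack of 20 servers at 10GbE behind two 40GbE uplinks:
print(oversubscription(20, 10, 2, 40))        # 2.5:1 oversubscription
print(port_utilization(20, switch_ports=48))  # under half the ports used
```

The second number is the under-use problem the paragraph describes: a 48-port switch in a 20-server rack leaves 28 powered but idle ports.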


Advantages:
  • The distributed architecture provides an efficient use of network cabling, because most of your cabling is done intra-cabinet instead of inter-cabinet.
  • The distributed architecture enables the efficient use of your floor space, as you are not using space for extra equipment cabinets at the end or middle of rows.
  • It is the lowest cost option when a “stop gap” solution using legacy equipment is required.


Disadvantages:
  • The architecture can be limiting as you move forward into newer technologies, because the infrastructure to expand is not in place.
  • Having all your switches located at the top of the racks is an inefficient use of your LAN/SAN switch ports. You often end up with unused ports, because the capacities and port counts of the switches you deploy may not match the footprint of your server cabinets or your LAN environment.
  • This architecture is not energy efficient. Unused ports still consume electricity and produce heat that needs to be cooled.
  • Distributed architectures are more difficult to manage in large deployments, where you could potentially have thousands of switches spread throughout your data center space. They can also present security risks, not only physically but also logically when you are trying to deploy virtualized networks.
  • LAN and SAN switch gear can potentially overheat because of its ToR position.

Zoned Direct Connect/Zoned Distribution architecture
The Zoned Direct Connect (also known as Zoned Distribution) architecture organizes the data center white space by zones (Figure 3). It is currently the most widely used data center architecture and is the one recommended by the TIA-942 data center standard.

Figure 3. The Zoned Direct Connect or Zoned Distribution architecture.

In many ways it represents a compromise between the centralized model and the distributed model, leveraging the best of both. You still have separate LAN/SAN environments, but the electronics are closer to the servers where they need to be accessed and you don’t need to have switches in the top of every cabinet or rack. This approach gives you the flexibility to support legacy architectures and equipment as well as the ability to move forward to support future technologies.

Depending on the location of the access network switch racks, a zone may be end of row (EoR), middle of row (MoR), or dedicated network row. These cabinets are usually locked and secured to prevent unauthorized access; in many cases those switch ports are replicated to cabinets beside them so that patching would be done in the patch panels, rather than with the active electronics. This approach is more secure and reduces the risk of electrostatic discharge hindering the operation of your electronics.


Advantages:
  • The scalable, repeatable design makes it easy to upgrade and expand: you simply add a zone when you need more capacity.
  • The repeatable, predictable design makes it easy to manage.
  • It provides an excellent balance of cable cost and switch port use, maintaining flexibility while maximizing port efficiency.
  • The design keeps cable bundles manageable, because runs cover very short distances within the row instead of running between rows or home-running cables all the way back to a centralized area.
  • The architecture enables implementation of network applications within their cable-distance limits, so you don’t have to worry about cable lengths or the number of connections.
  • Cabling cost is lower than with the Centralized or Direct Connect architecture.


Disadvantages:
  • It may not be a suitable design for smaller data centers, because of the extra floor space required for the EoR or MoR switch cabinets.
  • It may not work well with existing ToR configurations, where the bulk of the cabling is intra-cabinet.

With newer technologies such as 10GbE, 40/100GbE, Fibre Channel over Ethernet, 16G Fibre Channel, and 10G iSCSI, the bandwidth, distance, and connection limitations are more stringent than with lower-speed legacy systems. As you plan your data center’s LAN/SAN environment you must understand the limitations of each application being deployed and select an architecture that will enable you to not only support current applications, but also migrate to higher-speed future applications.
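One way to apply that advice is to check planned cable runs against the rated reach of each media type before committing to an architecture. The sketch below uses commonly cited reach figures (the 7 m twinax limit from this article, plus typical published values for 10GBASE-T, 10GBASE-SR, and 40GBASE-SR4); verify them against the relevant IEEE 802.3 clauses and your actual cable plant before relying on them.

```python
# Illustrative link-reach check. Reach values are commonly cited
# figures and should be verified against the governing standard
# and the installed media before use.

MAX_REACH_M = {
    ("10GbE", "twinax"): 7,    # passive direct-attach, per the article
    ("10GbE", "cat6a"):  100,  # 10GBASE-T over Cat 6A
    ("10GbE", "OM3"):    300,  # 10GBASE-SR over OM3 multimode fiber
    ("40GbE", "OM4"):    150,  # 40GBASE-SR4 over OM4 multimode fiber
}

def link_ok(speed, media, run_m):
    """True if a planned run fits within the media's rated reach."""
    limit = MAX_REACH_M.get((speed, media))
    if limit is None:
        raise ValueError(f"no reach data for {speed} over {media}")
    return run_m <= limit

print(link_ok("10GbE", "twinax", 30))  # False: a 30 m twinax run is too long
print(link_ok("10GbE", "OM3", 30))     # True: the same run on OM3 is fine
```

A check like this makes the architecture decision concrete: a centralized design with 30 m home runs forces fiber or 10GBASE-T, while a zoned design keeps runs short enough that cheaper media remain an option.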

Rodney Casteel is Chair of the TIA Fiber Optics LAN Section (FOLS) and technical manager at CommScope.
