Unified data centre fabric primer: FCoE and data centre bridging


What is a unified data centre fabric?

A unified data centre fabric is a network fabric that carries traditional LAN and storage area network (SAN) traffic on the same physical network, with the aim of reducing architectural complexity and improving data flow and access. To make this work, the traditional Ethernet network must be upgraded to become "lossless" and to provide additional data centre networking features and functions. In turn, the storage protocol must be adapted to run over Ethernet.

Data centre bridging: Upgrading Ethernet

The IEEE uses the term Data Centre Bridging (DCB) for an architectural collection of Ethernet extensions designed to improve Ethernet networking and management in the data centre. DCB is also known as Converged Enhanced Ethernet (CEE), Data Centre Ethernet (DCE, trademarked by Cisco), Enhanced Ethernet for Data Centre (EEDC) and other similar terms. DCB adds four basic functions to the existing Ethernet infrastructure to enable unified fabrics:

  • Traffic differentiation -- DCB can distinguish among LAN, SAN and inter-process communication (IPC) traffic.
  • Lossless fabric -- required for SAN traffic (modelled in the sketch after this list).
  • Optimal bridging -- allows shortest-path bridging within the data centre.
  • Configuration management -- provides configuration management functions that work with the existing Fibre Channel and Ethernet infrastructures.
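
The lossless-fabric item above is the heart of DCB for storage traffic. As a rough illustration (not from the original article), here is a minimal Python sketch of per-priority flow control; the buffer thresholds, priority numbers and class names are illustrative assumptions, not values from the IEEE specifications:

    from collections import deque

    # Toy model of per-priority flow control, the mechanism behind DCB's
    # lossless fabric. Thresholds and priorities are illustrative only.
    HIGH_WATERMARK = 6   # hypothetical: pause a class before its buffer fills

    class PriorityQueue:
        def __init__(self, priority):
            self.priority = priority
            self.frames = deque()
            self.paused = False        # has a PAUSE been signalled upstream?

        def receive(self, frame):
            self.frames.append(frame)
            if len(self.frames) >= HIGH_WATERMARK and not self.paused:
                self.paused = True     # pause this priority only; never drop
                print(f"PAUSE sent for priority {self.priority}")

        def drain(self):
            if self.frames:
                self.frames.popleft()
            if self.paused and len(self.frames) < HIGH_WATERMARK // 2:
                self.paused = False    # buffer has emptied; resume the sender
                print(f"priority {self.priority} resumed")

    # Storage frames (priority 3 is commonly assigned to FCoE) back up and
    # are paused, while LAN traffic on the same link continues unaffected.
    storage, lan = PriorityQueue(3), PriorityQueue(0)
    for i in range(6):
        storage.receive(f"fc-{i}")     # the sixth frame triggers a PAUSE
    lan.receive("web")                 # the LAN class is not paused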


IEEE 802.1 is the collection of standards for the operation and management of local area networks (LANs) and metropolitan area networks (MANs). Several new specifications are being added to IEEE 802.1 to provide the required functions; 802.1Qau was ratified in 2010, and ratification of the others is expected to follow.

These are:

  • 802.1aq -- Shortest Path Bridging
  • 802.1Qau -- Congestion Notification
  • 802.1Qaz -- Enhanced Transmission Selection
  • 802.1Qbb -- Priority-based Flow Control

These enhancements to Ethernet allow traffic to be paused rather than having packets dropped, providing the lossless behaviour that storage traffic requires and that is standard in Fibre Channel SANs. They also allow traffic to be grouped so that administrators can guarantee specific bandwidth and priority to each type of traffic.
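
To make the bandwidth-grouping idea concrete, here is a minimal Python sketch of Enhanced Transmission Selection-style allocation; the traffic groups and percentages are hypothetical examples, not values from the 802.1Qaz specification itself:

    # Illustrative sketch of Enhanced Transmission Selection (ETS)-style
    # bandwidth allocation on one converged link.
    LINK_GBPS = 10                                 # a single 10 GbE link

    # Guaranteed minimum bandwidth share per traffic group (totals 100%).
    ETS_GROUPS = {"LAN (TCP/IP)": 40, "SAN (FCoE)": 40, "IPC": 20}

    def allocate(active):
        """Divide the link among the currently active traffic groups.

        Each active group gets at least its guaranteed share; shares
        reserved for idle groups are redistributed proportionally.
        """
        shares = {g: ETS_GROUPS[g] for g in active}
        idle = 100 - sum(shares.values())          # freed by idle groups
        total = sum(shares.values())
        return {g: LINK_GBPS * (pct + idle * pct / total) / 100
                for g, pct in shares.items()}

    print(allocate(["LAN (TCP/IP)", "SAN (FCoE)", "IPC"]))
    # -> each group gets exactly its guarantee: 4.0, 4.0 and 2.0 Gb/s
    print(allocate(["LAN (TCP/IP)", "SAN (FCoE)"]))
    # -> LAN and SAN split IPC's idle 20%, getting 5.0 Gb/s apiece

Unlike rigid partitioning, bandwidth left idle by one group remains available to the busy ones, which is what lets a single converged link serve both LAN and SAN traffic efficiently.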

Combining FCoE and data centre bridging

Fibre Channel over Ethernet (FCoE) combined with DCB is currently the only way to converge Ethernet and Fibre Channel traffic as it leaves a server and passes through a top-of-rack switch. FCoE is the first major application to take advantage of the DCB enhancements to Ethernet. INCITS T11, the Fibre Channel standards group, has approved FCoE as a standard for storage traffic.
Because FCoE runs the existing Fibre Channel (FC) protocol over the new lossless Ethernet (DCB), the encapsulated protocol and its behaviour are the same as in traditional Fibre Channel.
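
To show roughly what that encapsulation looks like on the wire, the following Python sketch builds a simplified FCoE frame. The Ethertype 0x8906 is the value actually registered for FCoE; the header layout is a high-level reading of the FC-BB-5 framing, and the MAC addresses, delimiter codes and payload are placeholders, not a working initiator:

    import struct

    # Simplified sketch of FCoE encapsulation: a complete, unmodified FC
    # frame carried inside an Ethernet frame.
    FCOE_ETHERTYPE = 0x8906
    SOF, EOF = 0x2E, 0x42     # example start/end-of-frame delimiter codes

    def fcoe_frame(dst_mac, src_mac, fc_frame):
        """Wrap a Fibre Channel frame in an Ethernet/FCoE envelope."""
        eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
        fcoe_header = bytes(13) + bytes([SOF])   # version/reserved bits + SOF
        fcoe_trailer = bytes([EOF]) + bytes(3)   # EOF + reserved padding
        return eth_header + fcoe_header + fc_frame + fcoe_trailer

    # An FC frame carries a 24-byte header and up to 2,112 bytes of payload,
    # so the Ethernet link must support "baby jumbo" frames of roughly
    # 2.5 KB; the standard 1,500-byte MTU is too small for FCoE.
    fc_frame = bytes(24) + bytes(2112)                 # placeholder FC frame
    frame = fcoe_frame(bytes.fromhex("0efc00000001"),  # example FCoE MAC
                       bytes.fromhex("001b21000002"),  # example server MAC
                       fc_frame)
    print(len(frame), "bytes on the wire, before the Ethernet FCS")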

FCoE switches

FCoE fabrics must be built with switches that support DCB and FCoE, and these switches must interoperate with existing FC switches, support all FC advanced features and operate identically on FC and FCoE fabrics.

Typically, the top-of-rack switches that support DCB and FCoE have 10 Gigabit Ethernet (10 GbE) ports and optionally contain either 4 Gb or 8 Gb Fibre Channel ports. This allows these switches to handle all the LAN and SAN traffic within a rack and forward that traffic to the separate, existing LAN and SAN infrastructures elsewhere in the data centre. Previous generations of Ethernet switches did not provide full-function Fibre Channel ports for storage traffic, and current Fibre Channel switches do not provide full-function Ethernet ports for general LAN traffic.

There are technologies for long-distance transport of storage traffic over Ethernet, such as Fibre Channel over IP (FCIP) and Internet Fibre Channel Protocol (iFCP), but these run on traditional Ethernet and are subject to the same congestion problems that DCB is designed to overcome. In addition, various types of connections to SONET and other long-haul networks can carry converged traffic over very long distances, but these technologies simply provide long-distance transport without any of the local management and control features.


Unified data centre fabric benefits

One of the benefits of a unified data centre fabric is reduced cable count. Consider a rack full of servers and all the cables that are typically found in these racks. For a rack of 20 servers, each server might have four, six or more 1 Gb NIC ports and two 4 Gb Fibre Channel ports. For a rack of 20 servers with four NIC ports and two FC ports per server, that is 120 cables in that rack with 12 Gb of total network bandwidth per server. This rack would also have two separate switches at the top of the rack. In a unified, converged network, that cable count could be reduced to two cables per server, which would be 40 cables in the rack and 20 Gb of total network bandwidth per server. In addition, a rack of servers that used converged networking would require only one switch at the top of the rack.
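
The arithmetic behind those figures is worth laying out explicitly; this small Python sketch simply recomputes the numbers given in the paragraph above:

    # Cable-count and bandwidth arithmetic for the 20-server rack described
    # above; all figures come straight from the paragraph.
    servers = 20

    # Traditional rack: four 1 Gb NIC ports plus two 4 Gb FC ports per server.
    trad_cables = servers * (4 + 2)          # 120 cables
    trad_gbps = 4 * 1 + 2 * 4                # 12 Gb of bandwidth per server

    # Converged rack: two 10 GbE converged ports per server.
    conv_cables = servers * 2                # 40 cables
    conv_gbps = 2 * 10                       # 20 Gb of bandwidth per server

    print(f"traditional: {trad_cables} cables, {trad_gbps} Gb per server")
    print(f"converged:   {conv_cables} cables, {conv_gbps} Gb per server")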

Cutting power consumption with unified data centre fabric

Consider the power consumption of the rack of servers described above. If a server has four NIC ports, this is often accomplished by using two NIC ports on the motherboard and two NIC ports in one or two additional adapter cards. The Fibre Channel ports are usually provided by one or more additional adapter cards. With a converged, unified fabric, a single adapter card runs TCP/IP and storage traffic, resulting in fewer adapter cards and lower power consumption. In addition, power consumption is reduced by having a single switch at the top of the rack, rather than having a separate LAN and SAN switch.

Data centre bridging: What if you don't use Fibre Channel?

Those who do not use Fibre Channel might ask whether DCB and converged networks offer any benefit. If your storage traffic is exclusively file-server traffic using the Common Internet File System (CIFS) or Network File System (NFS) protocols, or if you use iSCSI as your SAN, you can run this traffic over "old-fashioned" Ethernet or over the newer DCB. Demartek hasn't run extensive lab testing comparing these Ethernet-based storage protocols on traditional Ethernet vs. DCB, but improvements are likely when they run over DCB because of its lossless characteristics and its new management and bandwidth QoS features.

DCB-capable 10 GbE switches

The switch vendors that support DCB are building this capability into their 10 GbE switches. DCB generally isn't built into 1 GbE switches, although many 10 GbE switches have 1 GbE ports, so those who currently run Ethernet-only storage protocols can continue to run them as they migrate to 10 GbE. Some switch vendors have not yet announced DCB-capable switches; some are waiting to see how the market matures around the capabilities DCB provides, and some are waiting for the standards to be ratified.

For now, it is best to consider unified, converged fabrics when planning new data centre build-outs and new infrastructure for the medium to long term.

About the author: Dennis Martin is the founder and president of Demartek, a computer industry analyst organization with its own on-site test lab. Demartek focuses on lab validation testing and performance testing of storage and related hardware and software products. Dennis has worked in the information technology industry since 1980, primarily in software development and project management in mainframe, UNIX and Windows environments, for a variety of large and small end-user customers and in engineering and marketing positions for storage vendors such as StorageTek. Dennis is the founder of the Rocky Mountain Windows Technology User Group in Denver, has made numerous presentations at conferences and has authored many industry articles.


This was first published in January 2011