The evolution of IBM SVC connection choices

IBM SAN Volume Controller (SVC) has offered Fibre Channel storage virtualization since June 2003.  Two SVC nodes communicate with each other via Fibre Channel to form a high-availability I/O group.  They also communicate via Fibre Channel with the storage that they virtualize, and with the hosts they serve that virtual storage to.  When IBM added real-time (Metro Mirror) and near real-time (Global Mirror) replication, that too was done using Fibre Channel, with each SVC cluster connecting to the other using Fibre Channel protocol transported over dark fibre (with or without a WDM) or via FCIP (Fibre Channel over IP) routers.

Each Fibre Channel port on an SVC node can act as a SCSI initiator to back-end storage and as a SCSI target to hosts, while all the time communicating with its peer nodes over those same ports.  With every generation of SVC node, these ports got faster and faster, going from 2 Gbps to 4 Gbps to 8 Gbps.  In SVC firmware V5.1 IBM added iSCSI capability to the SVC using the two 1 Gbps Ethernet ports in each node. This allowed each node to also be an iSCSI target to LAN-attached hosts.

When the Storwize V7000 came out in October 2010 it offered all of this capability, plus two fundamental changes to the design.

  1. Firstly, the two controllers in a Storwize V7000 can communicate with each other across an internal bus, eliminating the need to zone them together (or even to attach the Storwize V7000 to Fibre Channel fabrics).
  2. The other more obvious difference is that a Storwize V7000 comes with its own disks, which it communicates with via multi-lane 6 Gbps SAS.

When IBM added 10 Gbps Converged Enhanced Ethernet adapters to the SVC and to the Storwize V7000, these adapters operated as iSCSI targets, allowing clients to access their volumes via a high-speed iSCSI network.  In V6.4 code IBM allowed these adapters to also be used for FCoE (Fibre Channel over Ethernet).  These are also effectively SCSI target ports, allowing hosts that use CEE adapters to connect to the SVC or V7000 over a converged network.

If you have a look at the Configuration limits page for SVC and Storwize V7000 version 6.4 (the Storwize V7000 one is here), you will see this interesting comment:

“Partnerships between systems, for Metro Mirror or Global Mirror replication, do not require Fibre Channel SAN connectivity and can be supported using only FCoE if desired”

So does this mean we can stop using FCIP routers to achieve near real-time replication between SVC clusters or Storwize V7000s?  The short answer is: most likely not.  Let's look at why…

The whole reason Fibre Channel became the standard method to interconnect enterprise storage to enterprise hosts is simple:  packet loss is prevented by buffer credit flow control.  Frames are not allowed to enter a Fibre Channel network unless there are buffers in the system to hold them.  Frames are normally only dropped if there is no destination to accept them.  Fibre Channel is a highly reliable, scalable and mature architecture. When we extend Fibre Channel over a WAN we do not want to lose this reliable nature, so we use FCIP routers like Brocade 7800s, which continue to ensure frames are reliably delivered, in order, from one end point to another.

Converged Enhanced Ethernet allows Fibre Channel to be transported inside enhanced Ethernet frames.  The one fundamental that CEE brings to the table is the same principle: a frame should not enter the network without a buffer to hold it.  Extending FCoE over distance has the same challenge: the moment you start moving those frames over a WAN connection, you need to ensure frames are not lost due to congestion. How do we do this?  The same way we did with Fibre Channel:  we use dark fibre, we use WDMs or we use routers.  The same issues and requirements exist.
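Distance is hard on any credit-based scheme because the credits must cover every frame in flight on the link. A rough back-of-the-envelope sizing, assuming full-size 2148-byte FC frames, roughly 100 MB/s of payload per Gbps of line rate (8b/10b encoding), and ~5 µs/km propagation in fibre — a sketch of the arithmetic, not a substitute for vendor sizing tools:

```python
import math

def bb_credits_needed(distance_km: float, speed_gbps: float,
                      frame_bytes: int = 2148) -> int:
    """Rule-of-thumb buffer credits to keep a long FC link fully utilised.

    Assumptions (illustrative only):
      - light travels ~5 microseconds per km in fibre
      - an N-Gbps FC link carries roughly N * 100 MB/s of payload
      - credits must cover a full round trip (frame out, R_RDY back)
        so the sender never stalls waiting for credits
    """
    propagation_s_per_km = 5e-6
    round_trip_s = 2 * distance_km * propagation_s_per_km
    bytes_per_s = speed_gbps * 100e6
    frame_time_s = frame_bytes / bytes_per_s
    return math.ceil(round_trip_s / frame_time_s)

# The classic rule of thumb "about 1 credit per km at 2 Gbps":
print(bb_credits_needed(100, 2))   # ~94 credits for 100 km at 2 Gbps
print(bb_credits_needed(100, 8))   # 4x the line rate needs ~4x the credits
```

This is why "just stretch the lossless network" is not free: faster links and longer distances multiply the buffering the end points must provide, which is exactly the job distance-extension gear exists to do.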

For more information on FCoE over distance check out this fantastic Q&A from Cisco:

If you want to understand FCoE better, this document from Brocade is very good:


About Anthony Vandewerdt

I am an IT Professional who lives and works in Melbourne Australia. This blog is totally my own work. It does not represent the views of any corporation. Constructive and useful comments are very very welcome.

5 Responses to The evolution of IBM SVC connection choices

  1. John Sexton says:

    Anthony, nice blog and good point about FCoE. My take on the issue is that technically it may be possible, but practically it will not be (not yet anyway). Brocade appear to have a better perspective on FCoE than Cisco, as they see it more from a storage point of view rather than a network one, with a number of valid issues. From my research on FCoE there appears to be a recommendation to segregate storage data from other CEE traffic, to the extent of separate CNAs in a host — thus defeating the purpose of a combined network architecture.
    I tend to look at a more performance-oriented SAN. In FC terms it is still good practice to separate disk and tape drive traffic for a host by having separate HBAs on the host, the reason being the type of data transfer is so different. What would the recommendation be for a CNA and FCoE in such a scenario? I have yet to read where FCoE is tested with other Ethernet TCP/IP traffic for performance on a host with a CNA.
    As for longer distances and support of MM and GM, it would be a challenge for a LAN/WAN architect to provide a solution where CEE is extended beyond a data centre with a view of supporting MM or GM with or without TCP/IP. But then I am not a network expert.

    • Thanks for the great reply John, you add a whole new perspective. I was concerned that some people would view mirroring over long distance with FCoE as a viable real-world solution; I am dubious anyone is doing it, for the very reasons you discuss.

  2. Pingback: The evolution of IBM SVC connection choices | I Love My Storage

  3. Anand Gopinath says:

    Hi Anthony,

    We have to install and setup a 4 enclosure V7000. Is a single hot spare per enclosure good enough ???

    Also have you encountered any issues with V7000 Code ???

    • One spare per enclosure is fine.
      Once you have 4 spares, that should be more than enough unless you have lots of different size drives. The code looks stable. I always avoid x.x.x.0; by the time you get to x.x.x.2 you are normally safe to go.
