Virtualizing a Storwize V7000 with an IBM SVC

IBM has offered enterprise storage virtualization since June 2003 with the IBM SAN Volume Controller (SVC). In October 2010, IBM released the Storwize V7000, which takes the SVC code and packages it into a midrange disk product. So you now have four possible choices:

  1. Use SVC to virtualize your storage.
  2. Use Storwize V7000 to provide internal SAS drives plus virtualize your storage.
  3. Use Storwize V7000 as a midrange disk product.
  4. Use Storwize V7000 virtualized behind SVC.

The great thing is that all four choices are valid and all four choices work just fine.
But for customers already using SVC, or considering SVC, the question then becomes: should I virtualize a Storwize V7000 behind an SVC? Does this make sense?

The short answer:  YES!

We have a great many customers happily doing this, so I thought I would share some common questions I get around configuration. Firstly, there is an InfoCenter page on this, which you will find here. Secondly, there is a debate about whether we should create individual volumes/arrays on the Storwize V7000 or just create a single pool on the Storwize V7000 (which equates to striping on striping). More benchmarking is being done to see if one method is truly better than the other, so until then I recommend the method described below. If you have already done stripe on stripe, don’t go changing anything until I update this post.

How many ports should I use for zoning?

The Storwize V7000 has 8 Fibre Channel ports, 4 from each node canister.   You need to zone at least two ports from each node canister to your SVC cluster.   This is no different to how you would zone a DS5100 or an EMC VNX.
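
As a rough example of what that might look like (the port usage here is just an illustration, adjust it to suit your own fabric layout):

    Fabric A zone:  SVC node ports on fabric A + V7000 canister 1 port 1 + V7000 canister 2 port 1
    Fabric B zone:  SVC node ports on fabric B + V7000 canister 1 port 2 + V7000 canister 2 port 2

That gives the SVC at least two paths to each node canister, and each canister is reachable from both fabrics.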

How will the SVC detect the Storwize V7000?

On the SVC you will see two storage controllers, one for each node canister.  This is quite normal.  The reason for this is that each node canister reports its own WWNN.  This is not a problem and will not affect volume failover if one node canister goes offline.

In the example below the SVC has detected two new controllers.  The confusing factor is that both report as 2145s, even though they are in fact a single Storwize V7000.  Rename them to reflect what they really are (something like StorwizeV7000_1_Node1 and StorwizeV7000_1_Node2).
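
If you prefer the CLI, the renaming looks something like this (the controller IDs below are just examples from my setup; check yours with lscontroller first):

    svcinfo lscontroller
    svctask chcontroller -name StorwizeV7000_1_Node1 7
    svctask chcontroller -name StorwizeV7000_1_Node2 8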

How should I define the SVC on the Storwize V7000?

You need to create a new host on the Storwize V7000 and call it something like SVC_1.  If the SVC WWPNs don’t appear in the WWPN dropdown, you will need to manually add them as shown below:

You can get the SVC WWPNs from your existing zoning, by running svcinfo lsnode against each SVC node, or by displaying them in the SVC GUI as shown below:
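
For example, from the SVC CLI (the node names here are placeholders; the port_id fields in the detailed view are the WWPNs you want):

    svcinfo lsnode
    svcinfo lsnode node1
    svcinfo lsnode node2

Then on the Storwize V7000, if you need to create the host by hand, it looks something like this (the WWPNs are placeholders; you may need -force if the ports are not currently logged in):

    svctask mkhost -name SVC_1 -hbawwpn <svc_wwpn_1>:<svc_wwpn_2>:<svc_wwpn_3>:<svc_wwpn_4>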

What size Storwize V7000 volumes should I create?

My recommendation is to do the following on the Storwize V7000 (a rough CLI sketch of these steps follows the list):

  1. Create arrays, preferably of eight disks each.  The ideal number will depend on how many disks you have.  On my machine I have 22 disks, so I create three arrays, each with seven disks (plus one hot spare).
  2. Create one pool for each array.
  3. Create one volume from each pool (using all the space in the pool).
  4. Define the SVC to the Storwize V7000 as a host (as described above) and map all volumes to the SVC.
  5. On the SVC, detect all the Storwize V7000 LUNs as MDisks and create one pool from them.
  6. You should now have a pool on the SVC that you can use to create volumes to present to your hosts.  They will be striped by default, which is exactly what you want.
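
To make this concrete, here is a rough CLI sketch of those steps.  All the names, drive IDs and sizes are placeholders based on my 22-disk example, so treat it as a starting point rather than a recipe.

On the Storwize V7000:

    # one pool plus one RAID-5 array per set of seven drives (repeat for each array)
    svctask mkmdiskgrp -name V7K_Pool1 -ext 256
    svctask mkarray -level raid5 -drive 0:1:2:3:4:5:6 V7K_Pool1
    # one volume consuming all the free capacity in the pool
    svctask mkvdisk -mdiskgrp V7K_Pool1 -iogrp 0 -size <free_capacity> -unit gb -name V7K_LUN1
    # define the SVC as a host (as described above) and map every volume to it
    svctask mkhost -name SVC_1 -hbawwpn <svc_wwpn_1>:<svc_wwpn_2>:<svc_wwpn_3>:<svc_wwpn_4>
    svctask mkvdiskhostmap -host SVC_1 V7K_LUN1

On the SVC:

    # discover the V7000 LUNs as MDisks and put them all into one pool
    svctask detectmdisk
    svcinfo lsmdisk
    svctask mkmdiskgrp -name V7000_Pool -ext 256 -mdisk mdisk4:mdisk5:mdisk6
    # volumes created from this pool are striped across the MDisks by default
    svctask mkvdisk -mdiskgrp V7000_Pool -iogrp 0 -size 100 -unit gb -name host_vol1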

Hopefully all of this makes sense.  Questions and comments are very welcome.


About Anthony Vandewerdt

I am an IT Professional who lives and works in Melbourne Australia. This blog is totally my own work. It does not represent the views of any corporation. Constructive and useful comments are very very welcome.

33 Responses to Virtualizing a Storwize V7000 with an IBM SVC

  1. Dennis Skinner says:

    “The short answer: YES!” What is the long answer? You’ve only told us how, not why.

    IIRC, you get certain advanced features with the V7000 that you have to pay for on the SVC. If you put the V7000 behind the SVC, you can no longer use them w/o paying for the SVC licenses.

    So…..back to the question I ask every time a VAR mentions this option. What is the advantage of doing this other than easy migration between midrange subsystems?

    Don’t get me wrong. The controller code and SMClient software for the DS4k and DS5k are clunky and annoy me greatly. I’m leaning towards purchasing the V7000 and/or XIV just to get away from them, but I’m not sure this reasoning is very marketable for IBM (“buy our new product b/c our old ones have really outdated management interfaces”).

    • Hi Dennis. You raise some great points. The bottom line here is that the Storwize V7000 is valid either as a virtualization layer or as a midrange storage product, and it is priced that way. You can buy the Storwize V7000 without external virtualization, saving you money and giving you a great midrange, mid-price product.
      For clients who already have SVC, they often bought SVC because they wanted to bring multiple storage devices under the management of a single layer. If they choose to purchase a Storwize V7000 rather than another midrange storage product, there is no sin in putting it behind SVC. Your point about having to use the copy services functions at the SVC layer is a very good one. With XIV we ‘fixed’ this with a new pricing model; I agree it would be nice to see a similar model brought in for Storwize V7000 behind SVC.

  2. Angel Rivas Pazos says:

    Very good post, and I recommend the configuration you explained. The vDisks on the SVC will be striped if you make one MDisk group containing all the MDisks (coming from the external storage, in this case a Storwize V7000), which brings better performance for the applications running on it. If possible, could you write a post on the advantages of an SVC with a Storwize V7000 virtualized behind it, versus using a Storwize V7000 without SVC? Thanks in advance and keep up the good work.

    Best Regards/Saludos
    Angel Rivas Pazos
    IBM – IT Specialist Storage
    Sametime: Angel Alejandro Rivas
    @AngelRivasPazos

    • Thanks for the comment and I certainly agree, some more detailed information is still needed here.
      As I said, I get asked this a lot, so I wanted to get the basics down so people can get started.
      I will certainly follow up with your suggestions.

  3. Pingback: Virtualizing a Storwize V7000 with an IBM SVC « Storage CH Blog

  4. TMasteen says:

    Hello Anthony,
    Maybe an addition to your great post:

    How should I define the SVC on the Storwize V7000?
    If you have more than 2 nodes in your SVC cluster you can define the SVC as a host with the CLI.
    For example: if you have a 6-node SVC cluster, you will have 24 ports. With the GUI you can only add 16 ports!
    To add the remaining 8 ports you will have to use the addhostport command.

    But these are just details. We did configure the V7000 behind a SVC (without any issue), and it works great.

  5. Pascal Petit says:

    Hello,

    If the GUI prevents adding more than 16 ports, is that not to avoid having too many paths (16) to a V7000 volume?

    • Good question.
      The issue is that we can define up to 512 WWPNs per host as per here: http://www-01.ibm.com/support/docview.wss?uid=ssg1S1003902
      But the GUI only allows you to define 16 at a time.
      You can then use the GUI to add 16 more, so you don’t need to use the CLI.
      In truth it is really unusual for a host to have more than 16 WWPNs (that’s a lot of HBAs!),
      but an SVC with 8 nodes would have 32 WWPNs and would look like a single host to the Storwize V7000.

      There is a reverse limitation which is that a storage device cannot present more than 16 WWPNs to an SVC, but since a Storwize V7000 has only 8 ports, thats not going to be an issue.
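
      If you do end up using the CLI for the extra ports, the command is addhostport on the Storwize V7000; a minimal sketch (the WWPNs are placeholders for your remaining SVC port WWPNs):

        svctask addhostport -hbawwpn <svc_wwpn_17>:<svc_wwpn_18> SVC_1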

  6. TMasteen says:

    Anthony,
    In chapter “How will the SVC detect the Storwize V7000” I understand that the SVC will see two controllers.
    My question: will roughly half of the MDisks show up under each controller, or will all MDisks be visible under one controller?

    Thanks.
    TMasteen

    • Great question.
      The SVC will indeed see each Storwize V7000 node canister as a separate controller. Roughly half the MDisks should appear to be coming from each controller (based on the node preferences of the volumes themselves on the Storwize V7000). This means the SVC will send roughly half the traffic to one node canister and half to the other.
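
      You can check this from the SVC CLI; the controller_name column in lsmdisk shows which canister each MDisk is presented from (the controller names below assume you renamed them as suggested in the post):

        svcinfo lsmdisk -delim :
        svcinfo lsmdisk -filtervalue controller_name=StorwizeV7000_1_Node1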

  7. Martin Guadarrama Roman says:

    Hi Anthony,

    I’m using the V7000 as the third site for quorum on a 2-node SVC split-cluster configuration. I’m using only the first two ports (longwave) on each V7000 node. Each V7000 port is zoned to the two SVC ports on the same switch.

    Is there a reason why I’m seeing the SVC ports on the V7000 as degraded? Is it normal?

    • Hi Martin.

      Are you zoning two ports from each node canister to the SVC cluster?
      The SVC wants a minimum of two paths to every managed disk controller (for redundancy); if you are using only one port per node canister, that would be the issue (but you do say you are using two ports per node… so maybe that is not the reason).

      • Martin Guadarrama says:

        Hi Anthony, yes, I have one port on each V7000 node zoned to one port on each SVC node. The other thing is that even though I’ve presented 4 volumes to the SVC, 2 on each preferred node, the SVC sees all 4 MDisks through a single controller, not balanced across the two V7000 controllers.

      • Have you got any ‘Recommended Actions’ under the Events tab?
        Maybe you have an open event that needs to be ‘fixed’.
        Suggest you place a call with IBM.

      • Martin Guadarrama says:

        Thank you Anthony. Not on the SVC side, but on the V7000 there is a recommendation to connect the Ethernet cable to the second node, and we are waiting for facilities to have this cable connected. Otherwise everything looks OK, apart from a pending firmware upgrade on the switches, as they are not on a supported version. Could that be the cause of this behavior?

  8. John Sexton says:

    Hi Anthony,
    I have a client with SVC-managed storage in the production site and DR site, with Global Mirror replication (across FCIP). The client wants to replace all storage with V7000. The image mode approach you discuss above will work, where the V7000 becomes another storage controller behind SVC. But for the DR requirement, where replication is required at all times, can this work when creating image mode vdisks from managed vdisks?
    Are there any other options to achieve the client’s objective and minimize downtime per host?

    • Hi John.
      We need an outage because we are totally switching from one subsystem (SVC) to a different subsystem (Storwize V7000).
      The trick is this.
      On SVC we have A –> GM –> B
      On V7000 we create two new volumes C and D and present them to SVC as image mode.
      Then we use volume copy at each site to mirror the data at each site so you get this:

      SVC:    A --> GM --> B
              |            |
      V7000:  C            D

      Then we take an outage at site A and wait for GM to be in full sync, then break the GM.
      Then we split off the volume copies so C and D are independent. Then we start GM on the V7000 and do this:

      V7000: C –> GM –> D
      Choosing the option that C and D are already in sync (which they are).

      Then we change zoning to point host to V7000 and map volume C to the host.
      Then bring up the host.
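
      A very rough CLI sketch of the key steps (the pool, MDisk, cluster and relationship names are placeholders, and this is only an outline of the idea, not a tested runbook):

        # On the SVC at the production site (and similarly for B and D at the DR site):
        # add a copy of A in image mode on the V7000 LUN that was presented as an MDisk
        svctask addvdiskcopy -mdiskgrp V7000_ImagePool -vtype image -mdisk <v7000_mdisk> A
        svcinfo lsvdisksyncprogress

        # after the outage, with GM in sync and stopped, split off the V7000-backed copy
        # (copy id 1 is assumed here; check with lsvdiskcopy)
        svctask splitvdiskcopy -copy 1 -name A_split A

        # on the V7000s, create GM between C and D, marking them as already synchronized
        svctask mkrcrelationship -master C -aux D -cluster <remote_v7000_cluster> -global -sync
        svctask startrcrelationship <relationship_name>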

  9. Louis van der walt says:

    We are using the V7000 as the third site for quorum on a 4-node SVC cluster. I’m using only the first two ports (longwave) on each V7000 node to connect each node redundantly to Fabric_A and Fabric_B. On the SVC interface, going to the Networking -> Fibre Channel storage systems view, we notice that on V7000 node 1 all SVC ports are active, but on V7000 node 2 the state either shows active on SVC node 1 only, or inactive on all nodes. Any advice?

  10. Martin Guadarrama says:

    Hi Anthony, I have another 4-node SVC split-cluster implementation with a Storwize V7000 for the quorum disk. I’m having the same issue where the SVC shows as degraded on the V7000 as soon as I map any volume. At the time the SVC host object is created it shows online, but when volumes are mapped it goes degraded. I’m using only two ports from each node on the V7000, but they are zoned to all 4 SVC nodes. Any advice? It happened to me before and I didn’t find the cause or the solution.

  11. Melih says:

    Hi Anthony,

    First of all, thanks for your excellent article. I have three open questions.
    1. After zoning a V3700/V7000 to an SVC split cluster (code 6.4.1.2, the latest one) and mapping the LUNs to all SVC ports (1 host with 8 WWPNs), I see two storage controllers, one for each node canister, as you describe above… but only one controller has all my mapped LUNs from the V3700/V7000, and NOT “roughly half the MDisks” on each controller, as you said in an earlier comment. Is that correct so far?

    My second question:
    many Redbooks about SVC split cluster recommend choosing the right preferred SVC node for a LUN, so that the LUN (storage) and the SVC node reside in the same location (domain).

    But I am not able to change the default/preferred SVC node for a volume (vdisk) after creating the LUN. While creating it, it is possible to choose the preferred node, but not later, right?

    So what should I do to fulfil the recommendation? Is there a way to change the preferred SVC node for a LUN after creating it?

    Last question:
    in the chapter “General SAN Configuration Rules” (Information Paper: Guidelines for configuring SAN Volume Controller Split I/O Group Clustering, Information Center Errata, Version 6.3.0, Nov 18, 2011) it says:

    “Avoid using inter-switch links (ISLs) in paths between SAN Volume Controller
    nodes and external storage systems. ”

    So in a split-cluster SVC environment, my V3700 should have one host (the SVC cluster) with only 4 WWPNs, because if I zone every SVC node port I can’t avoid using ISLs between the SVC and the storage, right?

    Regards
    Melih

    • Hi.
      The unbalanced LUNs do not sound right; you should get half on each.
      The preferred node plays into this: it round-robins at volume creation time, although you can change it from the advanced button.
      After creation you can still change the node preference; there is a slight pause in I/O as the cache ownership for that LUN switches between nodes.
      You can do this with the GUI or the CLI.
      As for avoiding ISLs, it is always good advice, as ISL fan-in/fan-out ratios are often not well understood, so ISLs can become bottlenecks.
      Having only zoned four WWPNs is not a problem.
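
      For what it’s worth, on code levels that include the movevdisk command, the preferred node change from the CLI looks something like this (a sketch only; check the exact syntax on your particular code level):

        svctask movevdisk -iogrp io_grp0 -node node2 <vdisk_name>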

      • Melih says:

        So it seems to be a bug on the V3700, because I chose a different preferred node for many LUNs but only one controller has all the LUNs… and it seems that in SVC code 6.4 it is not possible to change the preferred node for a vdisk… I can’t edit the preferred node in the advanced mode or in the CLI using chvdisk; there is no -node parameter available like in the earlier SVC code versions…

  12. Rudi says:

    Hi,

    Any thoughts about using a V7000 running compression behind an SVC? How would this work, if at all?

    • It would work, but the V7000 MDisks would have to be thin provisioned to take advantage of RTC. Thin-provisioned MDisks are not normally supported, as the SVC has no way of knowing if the backend disk is running out of space.

  13. Thorsten says:

    Hi Anthony, really great blog. In the beginning you mentioned striping on striping and said not to change it unless you updated your post. Did you do some performance tests regarding the different implementations? What if we are running striping on striping: is there a need to change to multiple pools, or a single pool with dedicated MDisk assignment? We are getting a lot of 984003 messages (compression active), but general IOPS and MB/s on the V7K as well as on the SVC are low.

  14. Nico says:

    A bit off-topic: what would you say are the main benefits of SVC over V7K if you think of them as competing solutions? SVC plus DS or V3700 (or whatever) for disk space, or mirrored V7Ks, are the options. Of course the split cluster provides awesome availability, but other than that?

  15. Savona Francesco says:

    Hi Anthony, if I want to virtualize a DS4700 with a Storwize V7000, how should I zone the DS4700 ports?
    I have 4 DS ports: DS4700A_1, DS4700A_2, DS4700B_1, DS4700B_2.
    I have 4 V7000 ports zoned with hosts: V7000A_1, V7000B_1, HOSTA_1 * V7000A_1, V7000B_1, HOSTB_1 * V7000A_2, V7000B_2, HOSTA_2 * V7000A_2, V7000B_2, HOSTB_2

    Do you mean that for zoning the V7000 with the DS I must use two other FC ports, i.e.:
    V7000A_3 V7000B_3
    V7000A_4 V7000B_3

    TKS.

  16. Rouffa says:

    Hello Anthony,
    After activating RTC on the V7000, CPU usage got high; it was related to the workload running on the V7000 and to the rule of 1/4 of the CPU being dedicated to I/O and 3/4 to RTC.
    The client decided to add an SVC cluster,
    so the RTC workload will be done using the SVC.
