About me

I am an IT professional who lives and works in Melbourne, Australia.
This blog is entirely my own work.
It does not represent the views of any corporation.
Constructive and useful comments are always welcome, but spam, trolling and abusive comments will not be posted.

42 Responses to About me

  1. Mark says:

    Hi,

    I have just started looking into PowerHA with XIV synchronous system mirroring between two sites (less than 100 km apart) and I am looking for any information about it. Do you have any advice in this area you could share with me?

    All the best!

    MarkD:-)

  2. avandewerdt says:

    Hi Mark.

    Great blog BTW, I have added it to my links section.
    As for PowerHA, I don’t have anything to hand, but I will keep my eyes open.

  3. Wilson says:

    Hi Anthony,

    As you are the storage expert, I would like to ask a relatively simple question. Can you expand on how the upgrade is accomplished? Does it use the storage manager software to apply the binary to each controller, or do you run an executable? Also, during the upgrade, does the unit have to be taken down, or with dual controllers can you upgrade one while running on the second?

    Thanks!

  4. avandewerdt says:

    Hi Wilson.
    You’re commenting on the ‘About Me’ page.
    Did you mean to comment on a specific post that I created?

  5. Jens says:

    Hi,

    Can you please fix up the RSS feed?
    https://aussiestorageblog.wordpress.com/feed/

    TY!

    • avandewerdt says:

      Hi.

      I am confused. The link on the front page is correct (it matches the one you give), which I believe is the feed.
      If I use that URL in Google Reader, it works without issue (as far as I can tell).

      What am I doing wrong?

  6. Bharath Nagaraj says:

    Hi Anthony,

    We had an issue with the SVC handling EMC disks, which forced us to remove the zoning between EMC and the SVC (the issue actually took down the SVC cluster). But the SVC still has the MDisk, storage pool and volume definitions from the EMC disks, and these are in an offline state.

    We now plan to delete the definitions from the SVC. Any idea whether there will be any issues if we try to delete these offline MDisks/pools/volumes?

    Thanks
    Bharath

    • Hi Bharath.

      There should be no issues doing this, although I hope you reported the issue to IBM Service so they can follow up on it.
      You will probably need to delete the VDisks with ‘force’, and then delete the pool. When you delete the pool it should delete the MDisks.
      If any MDisks still hang around, just run a new MDisk discovery.
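      As a rough sketch, that cleanup maps to SVC CLI commands along these lines (the volume and pool names are invented placeholders, and exact syntax varies by SVC code level):

      ```shell
      # Delete each offline volume; -force is needed because the backing MDisks are gone
      svctask rmvdisk -force EMC_vdisk01

      # Delete the storage pool; this should also remove its member MDisks
      svctask rmmdiskgrp -force EMC_pool

      # If any offline MDisks still hang around, trigger a fresh discovery
      svctask detectmdisk
      ```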

  7. Bharath Gowda says:

    Hi Anthony,

    Thanks for the response. Yes we had the issue reported to IBM.

    Thanks
    Bharath

  8. Nick says:

    Hi Anthony, thanks for the great blog. I have this very odd problem with our XIVs at work. I’d love it if I can get some of your expertise on the matter! :) Do you have an email address I can send some screenshots to?

  9. Hi Anthony,

    I am looking for a “traceroute” equivalent for SAN. Do you know of any such tool that works with AIX 5.3 and above?

    Thanks,

    MarkD:-)

    • Great question. From a host perspective there are several applications that can help, like QLogic SanSurfer and Emulex LightPath, but I don’t know if there are AIX versions (I don’t think so). You are better off working with the switch itself to do this. There are utilities like fcping on both major vendors’ switches that have improved drastically over time.
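      As a sketch of that switch-side approach (the WWPNs are made-up placeholders, and exact syntax varies by firmware release):

      ```shell
      # Brocade FOS: ELS echo to one device, or a reachability check between two devices
      fcping 10:00:00:00:c9:aa:bb:cc
      fcping 10:00:00:00:c9:aa:bb:cc 21:00:00:e0:8b:11:22:33

      # Cisco MDS NX-OS equivalent
      fcping pwwn 10:00:00:00:c9:aa:bb:cc vsan 10
      ```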

  10. Russell Findlay says:

    Hi Anthony,

    Great blog, for our sakes please keep up the great work! There are very few bloggers who seem able to keep focused on the technical (rather than the political), so it’s refreshing to read your posts.

    I have a friend who has a DS5300 issue that you may be able to assist with. The storage system is located in a remote location in Asia and the local channel has not been able to provide clarity. The problem has been logged with support, but they are finding it difficult to resolve without physical access, as they suspect the cabling is incorrect or faulty.
    There are 4 FC drive trays connected across all the back-end loops, plus a SATA drive tray daisy-chained off one of the FC trays. Both the FC and SATA trays on the same loop are showing degraded status (2 Gbps instead of 4).
    We have ensured that the cabling looks like that recommended in the Redbook, but do you have a suggestion as to how this system should be cabled to best suit the 5 mixed enclosures?

    Thanks in advance,
    Russell

    • Hi Russell.
      The cabling is indeed the most likely culprit.
      Normally what I do is create a Visio diagram for the client and map out the cabling.
      Because the DS5100 has up to 8 loops, I normally spread the enclosures out as much as possible, so I wonder why they are all cabled together.

  11. Dennis Skinner says:

    Here is a question / potential blog post. What is the preferred method of performing maintenance on the back-end storage behind an SVC? We either move all the VDisks to other storage, or shut down the hosts accessing the VDisks and then do the firmware updates, but the SVC complains even if there are no VDisks in the pool/controller. It seems like there should be a way to put the controller into “maintenance mode”. Thanks!

    • Great question.
      Right now there is no way to signal to the SVC that a controller needs to go offline, so there is no way to stop the complaints when it does.
      The only way to stop the errors would literally be to delete the entire pool (after evacuating its contents somewhere else).
      Most clients I work with now use Volume copies (also called VDisk mirrors) to protect data on storage subsystems they are doing maintenance on.
      It won’t stop the complaints, but doing this gives the highest level of availability.
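      The VDisk-mirror approach, sketched as SVC CLI (the volume and pool names are placeholders):

      ```shell
      # Add a second copy of the volume in a pool that is NOT under maintenance
      svctask addvdiskcopy -mdiskgrp OTHER_pool prod_vdisk01

      # Wait for the new copy to synchronise before starting maintenance
      svcinfo lsvdisksyncprogress prod_vdisk01

      # Afterwards, optionally drop the extra copy (copy IDs come from lsvdiskcopy)
      svctask rmvdiskcopy -copy 1 prod_vdisk01
      ```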

  12. Jarrod Smithers says:

    Hi Anthony,

    Great blog, please keep it up. I have a question regarding optimisations on our Informix database, as we are currently migrating to a Gen 3 XIV. We use VxVM and I am interested in the performance improvements we can make. We are going to test using different HBA queue depths on the host and different LUN sizes, but I’m unsure at the moment whether changing the I/O write size in VxVM and/or striping with VxVM will prove useful. Have you any opinions on this?

    Thanks

    • Hi Jarrod,
      Fewer LUNs and more queue depth both normally have a positive effect on performance. Write size is not such a big issue, as all writes go to cache, and because XIV uses 1 MiB partitions, pretty well any commonly used write size will fit inside that. Striping with VxVM is not needed, as the data on an XIV is already very widely striped (180 MiB of contiguous logical block addresses is striped across 180 disks – that’s about as wide as you can get!).

      • Jarrod Smithers says:

        Hi Anthony,

        Thanks for that, confirms what we were thinking. I’ll let you know what our results show.
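        For reference, the host-side queue depth discussed above can be checked and raised on AIX along these lines (the hdisk name and value are placeholders):

        ```shell
        # Show the current queue depth for a disk
        lsattr -El hdisk4 -a queue_depth

        # Raise it; -P defers the change to the next reboot if the disk is busy
        chdev -l hdisk4 -a queue_depth=64 -P
        ```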

  13. Mark says:

    Great blog Anthony – full of nuggets! Thanks!

  14. Damien says:

    Hi Anthony! How are you? Your blog is great and I read it a lot for everything concerning the Storwize V7000. Do you know if the V7000 with firmware 6.4 is supported for VMware SRM 5? I’ve already checked at http://www.vmware.com/resources/compatibility/search.php?deviceCategory=sra but if you have any info, it would be great! Thanks!
    Damien

  15. Suman Debnath says:

    Hi Anthony;

    I am Suman Debnath from IBM STG Lab Services, India.
    I was searching for your email ID but couldn’t find it, so I am just posting a message here (though this is not the right place). I got an opportunity with the Actifio engineering team to work as a Software Test Engineer in India.
    So I just wanted to discuss it with you and get your valuable suggestions.
    Can I have 5-10 minutes of your time, please?

    Can I get your email/Skype/cell number, please?

    I have followed your blog since I joined IBM 2 years back, and I must say almost 40% of my learnings are from your blog :-) Thank you so much for all your contributions.

    Eagerly waiting for your response.

    Thank You

    Regards
    Suman
    91 9620102221

  16. Greg says:

    Hi Anthony, great blog. I would like to know the proper procedure for replacing a non-failed disk in the V7000.
    Eagerly awaiting your response.

  17. home warranty maryland says:

    It’s nearly impossible to find knowledgeable people in this particular topic, but you seem like
    you know what you’re talking about! Thanks

  18. Nishit Pillay says:

    Hello Anthony,
    I am fairly new to the IBM V7000 storage system. I am planning to write up a capacity and infrastructure report for the array my vendor owns.
    I was not able to find the right IBM tool for V7000 historical data capture, monitoring and analysis. Could you please guide me or help me with this?
    Specifically, I am looking for something like the XIV storage management GUI (Monitor -> Statistics), which shows hour, day, week, month and year graphs and reports for IOPS, latency and bandwidth.
    The V7000 has something similar (Monitoring -> Performance), but it shows live data rather than historic data.

  19. Eric Derr says:

    Hello Anthony,
    First, if this is the incorrect venue for this question, I apologize. I’m in the process of swapping out my old SAN switches, Brocade 3900s on FOS 5.3.2a, for Brocade 5300s on FOS 7.1.1.
    I am looking for a way to get the alias and zoning information from the old switches to the new ones without doing it manually. Direct E-port connections are not supported. Any ideas?

    Thanks,
    Eric

    • My first thought is to use the configupload and configdownload functions to achieve this, but I am not sure how they will cope with wildly different FOS levels. I am on the road right now, so research will take some time.

      • Eric Derr says:

        That’s what I did. I tried to just download the zoning information to the new switches, but was getting errors for a corrupted file. So I removed and saved all the info in my new config file from the Zoning tag on down, removed everything but the zoning info from my old switches’ file, added it to the file for the new switches with cat >>, and then added back what I had removed first. I haven’t started slowly migrating to the new switches yet, but it looks good so far.
        Thanks,
        Eric
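        The workflow Eric describes can be sketched roughly as follows (the file names are placeholders; configdownload is disruptive, so schedule an outage and check the FOS release notes first):

        ```shell
        # On the old 3900: upload the full configuration to an FTP/SCP host
        configupload                      # interactive: supply host, user and file name

        # On a workstation: append just the zoning section of the old file
        # to the configuration uploaded from the new switch
        cat old_zoning_only.txt >> new_switch_config.txt

        # On the new 5300: disable the switch, download the merged file, re-enable
        switchdisable
        configdownload                    # interactive: point it at new_switch_config.txt
        switchenable
        ```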

  20. savona francesco says:

    Hi. I must rearrange a Storwize V3700 pool after adding a new MDisk. Is the tool I can run the same as for the V7000?

    Thanks a lot.

  21. savona francesco says:

    Anthony, thanks a lot for the answer.
    One final question: can I run the script in a live runtime environment, or must I run it under zero-I/O conditions?

    Thanks again.

  22. Sudharsan says:

    Hi Anthony,

    Great job on the blog. I enjoy reading it when I am stuck on an issue. Continue the good work.

    I have a question for you. On an AIX server, can I have both SVC and XIV LUNs presented? Would the multipathing software have issues taking control/ownership of the HBA?

    Looking forward to your reply… Have more questions for you.

    Thanks and Regards,
    Sudharsan

    • This is a great question and goes to the heart of MPIO co-existence.
      Both products support the use of PCMs. The XIV PCM is already in AIX (from 6.1 if I recall correctly).
      SVC can actually use the AIX PCM from AIX 6.1 onwards.
      So you can absolutely have them sharing the same HBA, as their multipathing uses PCMs that are part of the AIX MPIO framework.
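      On AIX you can confirm which PCM is in play with commands along these lines (the hdisk number is a placeholder):

      ```shell
      # List which driver/PCM AIX will use for each recognised storage family
      manage_disk_drivers -l

      # Show the PCM actually attached to a given disk
      lsattr -El hdisk4 -a PCM

      # List all paths across both the SVC and XIV LUNs
      lspath
      ```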

  23. Bill Gerard says:

    Hi,

    I am looking for some canned scripts that my customer could use to manipulate FlashCopies on a V7000. They want to duplicate the snapshot function currently provided by NetApp’s SnapManager. FlashCopy Manager doesn’t provide this, unless I don’t understand FCM as well as I should.

  24. Comment on “Why ALUA is a very cool acronym”:
    On AIX MPIO you can configure paths with “priority”, and attempt to balance them that way. It’s very clunky (the actual commands to do so).

    If you don’t do this, all the disks will be on path 0 by default (unless the array PCM makes some kind of provision, I think; I don’t know if it can, however)…

    but if you DO do this, and you don’t know which are the ALUA optimal paths, you will screw things up too (you will ‘balance’ stuff to the non-optimal paths)…

    further, if you change priorities of paths once applications are started (and by application, I think I mean varyon, lol), AIX doesn’t seem to want to switch paths. It seems you must DISABLE the paths it is using to kick it hard enough to get it to actually DO a freaking path switch.

    If you use round-robin, you avoid most of these headaches – it uses the (in my case 4) optimal paths, and only those. BUT, if you then switch controller preference on the (in my case EVA) array, AIX continues, again, to use the same (now non-optimal) paths.

    This is using AIX 7.1; I’m afraid AIX is still clueless. In addition, although the chdev -U command has been added, most non-trivial hdisk and FCA parameters, e.g. queue_depth (and algorithm, reserve_policy, max_transfer), are non-modifiable, at least in my config.

    The ‘solution’ is to try to guess your workload, balance the LUNs as best you can between the controllers before configuring your disks with round_robin, set max_transfer and queue_depth for your app so other apps on the array aren’t competed out, and hope your workload doesn’t change too much. If it does, you can always tell the app folks that the app vendor chose the OS, not you (I guess…)
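    For what it’s worth, the path-priority and round-robin settings described above are driven with commands like these (the hdisk, adapter and connection values are placeholders):

    ```shell
    # Show the paths for one disk
    lspath -l hdisk4

    # Raise the priority of one path (1 is the highest priority on AIX MPIO)
    chpath -l hdisk4 -p fscsi0 -w "50050768012345aa,0" -a priority=1

    # Or use round-robin across the enabled paths instead of manual balancing
    chdev -l hdisk4 -a algorithm=round_robin -P   # -P defers to next reboot if busy
    ```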

  25. Brazinho Diniz says:

    Hello Sir,

    Do you have any documents on SAN zoning for V7000 replication (Remote Mirroring)?

    Thank You,

    Kind Regards,
