Migrating a RHEL v5 host to XIV

I just successfully migrated a client RHEL v5 Linux system (using an EMC CLARiiON with PowerPath) to an XIV.  I thought I would document the process we followed and any lessons learnt.

So if you’re interested in how to migrate a Linux Red Hat server from EMC to XIV, read on….

—————————————————————————————–

We start by downloading the latest XIV Host Attachment Kit (the example here is for RHEL 5 64-bit, downloaded using wget). This package configures the multipathing and is a really nice tool. The file name shown in the example may change, so check the IBM FTP site for the latest version.

# cd /tmp
# wget ftp://ftp.software.ibm.com/storage/XIV/host_attach/1.5.2/linux/XIV_host_attach-1.5.2-rhel5-x64.tar.gz

We now install sg3_utils and update device-mapper-multipath, because the XIV Host Attachment Kit requires them. We used yum to do this, so the process was very simple. When you install the Host Attachment Kit you will get a warning if these packages are missing, so we are just getting in early.

# yum install sg3_utils
# yum update device-mapper-multipath

Back up the volume group configuration with vgcfgbackup, just in case we need to back out. This backs up the LVM configuration to /etc/lvm/backup; you should see a file for every VG.
# /usr/sbin/vgcfgbackup
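
To confirm the backups were written, list the backup directory; you should see one file per volume group (the names below are just examples – VolGroup00 is the RHEL 5 default system VG):
# ls /etc/lvm/backup
Ora-Data  VolGroup00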

Back up the multipath configuration file, again just in case we need to back out.
# cp /etc/multipath.conf /etc/multipath.conf-powerpath

Back up /etc/fstab, again just in case we need to back out.
# cp /etc/fstab /etc/fstab.powerpath

Call your application administrator and make sure they have shut down all their applications. Now unmount the SAN-based filesystems (this presumes the applications really are stopped!). Confirm with df that all SAN-based filesystems are unmounted; I don't show the output of the df command here.
# df
# umount /oracle/oradata/TEST1/arch
# umount /oracle/oradata/TEST1/data01
…. etc
# df

Deactivate all SAN-based volume groups to be migrated. This command deactivates all volume groups (I am presuming you're taking a total outage and moving all your VGs; if not, you will need to specify the desired VGs). It will not take the system VG offline. You should get a message for each VG showing that no logical volumes are active:
# vgchange -an
0 logical volume(s) in volume group "Ora-Data" now active

Export your SAN-based volume groups using vgexport. This command exports all the VGs (except the system VG). You should see an export message for every VG.
# vgexport -a
Volume group "Ora-Data" successfully exported

Now remove EMC PowerPath. First determine the exact package name, then erase it. Your version number may differ from the one shown in this example.
# rpm -qa | grep power
EMCpower.LINUX-5.3.1.00.00-111
# rpm -e EMCpower.LINUX-5.3.1.00.00-111

Edit your fstab to comment out all application filesystems (insert a # at the start of each line), but don't comment out your swap entry… you will regret it.
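
As an illustration, a commented-out entry might look like this (the device path and mount point here are hypothetical, reusing the Ora-Data volume group from this example):
#/dev/Ora-Data/data01 /oracle/oradata/TEST1/data01 ext3 defaults 1 2
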
Set all applications to not automatically start on reboot.
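
On RHEL 5 this is typically done with chkconfig; for example, if your database is controlled by an init script (the service name "oracle" here is hypothetical):
# chkconfig --list oracle
# chkconfig oracle off
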
Confirm you can get to the machine remotely (via whatever remote interface your server hardware offers).
Shutdown the server:
# shutdown -h now

On the SAN switch(es), change the active zone set so that the Red Hat Linux server can no longer access the EMC CLARiiON host ports; it no longer needs to communicate with the CLARiiON.
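
How you do this depends on your switch vendor. On a Brocade switch, for example, removing the zone from the active configuration might look like this (the config and zone names here are hypothetical):
switch:admin> cfgremove "PROD_CFG", "rhel5_clariion_zone"
switch:admin> cfgsave
switch:admin> cfgenable "PROD_CFG"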

On the EMC CLARiiON (presumably using Navisphere), map the host's volumes to the XIV. This presumes you have already defined the XIV to the EMC CLARiiON. There are details in this Redbook.
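
If you prefer the Navisphere CLI to the GUI, the mapping can be sketched roughly like this – the SP address, storage group name and LUN numbers are all hypothetical, so check the naviseccli manual for your FLARE release:
# naviseccli -h <SP-IP> storagegroup -create -gname XIV_Migration
# naviseccli -h <SP-IP> storagegroup -connecthost -host XIV -gname XIV_Migration
# naviseccli -h <SP-IP> storagegroup -addhlu -gname XIV_Migration -hlu 22 -alu 22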

Define and activate the Data Migration process on XIV.  You will clearly need to know the Host IDs you used when you mapped the volumes to the XIV.    Don’t map the volumes on the XIV to the host yet (we will do this soon).
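
For reference, from an XCLI session the definition and activation look broadly like this (the volume, pool and target names are hypothetical, and the exact dm_define parameters may vary with your XIV firmware level):
>> dm_define target=CX4_Migration vol=oracle06_22 lun=22 source_updating=no create_vol=yes pool=Oracle_Pool
>> dm_test vol=oracle06_22
>> dm_activate vol=oracle06_22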

Restart the server and wait for Linux to boot up. It should come up without EMC PowerPath, without the applications started, and without any SAN disks.

Log on to Linux, then untar and install the XIV Host Attachment Kit.
# cd /tmp
# tar -zxvf XIV_host_attach-1.5.2-rhel5-x64.tar.gz
# cd XIV_host_attach-1.5.2-rhel5-x64
# ./install.sh

The HAK will install in /opt/xiv/host_attach/bin but you will not need to go to that directory to issue XIV commands.
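
The installer puts the xiv_* utilities on your PATH; a quick way to confirm this (the exact target path may vary between HAK versions):
# which xiv_attach
/usr/bin/xiv_attach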

Now log on to the XIV, define the host, and map the migrating volumes to it.
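
From an XCLI session this looks roughly like the following (the WWPN is hypothetical; the host and volume names match the example output later in this post):
>> host_define host=Oracle06
>> host_add_port host=Oracle06 fcaddress=10000000C9123456
>> map_vol host=Oracle06 vol=oracle06_22 lun=22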

Now return to the Linux host and run xiv_attach and follow all the prompts (mainly you just hit enter several times):
# xiv_attach

Validate the Host Attachment Kit has installed and the necessary services are running:
# xiv_fc_admin -V

   udev multipath rules...              OK
   multipath.conf...                    OK
   multipathd service...                OK

Check that your multipath configuration is correct (you're looking for a whole bunch of dm devices, each with sd devices underneath it, one sd device for each path). I show an example of the output below: one XIV disk with three paths (sdf, sdn and sdv).
# multipath -l

 mpath2 (20017380011ae2519) dm-4 IBM,2810XIV
  [size=80G][features=1 queue_if_no_path][hwhandler=0][rw]
  \_ round-robin 0 [prio=0][active]
  \_ 3:0:2:22 sdf  8:80   [active][undef]
  \_ 3:0:3:22 sdn  8:208  [active][undef]  
  \_ 3:0:4:22 sdv  65:80  [active][undef]

Check the output of the xiv_devlist command.   You now see how useful this command is:
# xiv_devlist

 XIV Devices
 ---------------------------------------------------------------------------
 Device              Size    Paths  Vol Name     Vol Id   XIV Id   XIV Host
 ---------------------------------------------------------------------------
 /dev/mapper/mpath2  85.9GB  3/3    oracle06_22  9497     7801234  Oracle06
 ---------------------------------------------------------------------------

Run pvscan to detect the volume group descriptor on each LUN. This will allow Linux to detect your volume groups; you will see the relevant VG for each dm device:
# pvscan
PV /dev/dm-4     is in exported VG Ora-Data [80.00 GB / 0    free]

Import the volume groups:
# vgimport -a
Volume group "Ora-Data" successfully imported

Activate the volume groups:
# vgchange -ay
1 logical volume(s) in volume group "Ora-Data" now active

Edit the /etc/fstab file to remove the comments from each of the application filesystems.

Once you have edited the /etc/fstab file, you can mount the file systems:
# mount -a

Now hand the system over to the application administrator.
They need to restart their applications and test them.
Ensure they set their application to autostart (if this is what they want).
I recommend that after testing the application they perform a final reboot to test their server.
If they are going to reboot, ensure you bypass fsck by using the shutdown command below. Otherwise, if the system has been up for a very long time, you may wait a long time for the automatic fsck to complete:
# shutdown -rf now

Your migration is complete!



15 Responses to Migrating a RHEL v5 host to XIV

  1. Pingback: Migrating a RHEL v5 host to XIV | Storage CH Blog

  2. madunix says:

    keep up the good work … xiv r0cks

  3. Bernard Goh says:

    If I am just migrating to a new SAN switch for one of my RHEL servers, do I need to back up anything?

  4. Kirzon says:

    Great post.
    However, I was thinking of migrating from internal disks, or from booting off a NetApp SAN, to XIV.
    What is the best way of doing so for the whole LUNs?

    • avandewerdt says:

      Great question. Migrating from internal disk to boot from SAN needs some conversion work that a hardware migration tool like the one in XIV cannot do. I have done these conversions using Softek: http://www-935.ibm.com/services/us/en/it-services/softek-tdmf-ip-for-windows.html

      • Erez Kirson says:

        Thanks for the fast reply.
        I was thinking of doing one of the following two options:

        1) Use dd:
        Create a LUN within the XIV, then use dd to migrate to the new LUN – this will move the whole volume.
        Configure the HBA to boot from XIV, then reboot in rescue mode and create a new initrd with the needed modules.
        Only then install the XIV HAK.

        2) Use pvmove.
        Add the XIV LUN to the LVM ( pvcreate /dev/xiv + vgextend VolGroup00 /dev/xiv )
        Move data to the new disk ( pvmove /dev/sda /dev/xiv )
        Delete the old disk ( vgreduce VolGroup00 /dev/sda )
        Note: migrate /boot to the new XIV too.

        Worked in vmware :)
        -Erez

      • avandewerdt says:

        I have seen someone use the first method. You want to single-path the host until you have it booting cleanly off the SAN.
        My Softek suggestion was hare-brained… you're on Linux, not Windows.

      • Erez Kirson says:

        avandewerdt:
        Here is my input from today's work – I think it could be helpful to many people migrating boot-from-SAN on storage X to boot-from-SAN on XIV.

        1) The Linux configuration was boot from a NetApp LUN in multipath ( /dev/mapper/mpath0 )
        Step I
        We mapped the XIV LUNs to the Linux machine – 10 paths for each controller ( 0:0:0:x / 1:0:0:x )

        Step II
        We installed XIV HAK 1.6 and ran the xiv_attach script to generate a working /etc/multipath.conf.
        We verified that multipath -ll showed both the XIV and NetApp devices.

        Step III
        Configure the multipath alias in the bindings file:
        # vi /var/lib/multipath/bindings

        # Format:
        # alias wwid
        #
        # Mpath0 was NetApp – changed mpath0 to XIV
        #mpath0 3600508b400070aac0000900000080000
        #mpath1 323342080000
        mpath0 323342080000

        ————–
        Note: we changed the mpath0 alias to point at the XIV LUN.

        Step IV
        Create a new initrd:
        # mkinitrd /boot/initrd-xiv.img `uname -r`
        Edit grub to point at the new initrd.

        Step V
        We used dd to migrate the NetApp LUN to the XIV LUN:
        # dd if=/dev/mapper/mpath0 of=/dev/mapper/mpath1 bs=8M

        THAT'S IT!
        Note: disable the NetApp LUN zoning + reconfigure the HBA to boot from the XIV LUN.
        Boot and enjoy !!!!

      • avandewerdt says:

        Awesome! Thanks for sharing.
