Breaking through the 2TB barrier

There was a time when 32 bits was considered a lot.   A hell of a lot.
With 32 bits, you can represent a hexadecimal number as big as 0xFFFFFFFF.
In decimal that's 4,294,967,295.   Hey… imagine a bank account balance that big?
If you use 32 bits to count out 512 byte sectors on a disk, you could have a disk that's 4,294,967,295 times 512… or 2,199,023,255,040 bytes!   That sounds huge, right?

Well… actually… no… that's 2 TiB, which most people would refer to as 2 terabytes.    Mmm… suddenly I am less impressed (still wouldn't mind that as a bank account though).
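The arithmetic above can be sketched in a few lines of Python (purely illustrative):

```python
# 32-bit LBA arithmetic: how many bytes can a 32-bit sector count address?
SECTOR = 512                       # bytes per sector
max_sectors = 0xFFFFFFFF           # largest 32-bit value: 4,294,967,295
total_bytes = max_sectors * SECTOR # 2,199,023,255,040 bytes

print(f"{max_sectors:,} sectors x {SECTOR} bytes = {total_bytes:,} bytes")
print(f"= {total_bytes / 2**40:.2f} TiB")  # ~2.00 TiB (binary terabytes)
```

Note the units: 2,199,023,255,040 bytes is almost exactly 2 TiB (2^41 bytes), but about 2.2 TB if you count in decimal terabytes.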

Now there are plenty of operating systems that still cannot work with a disk that is larger than 2 TiB.  One of the more common is ESX.  I am presuming this limitation is going to disappear, so storage subsystems need to be ready to create volumes that are larger than 2 TiB.

The good news is that with the May 2011 announcements, IBM is removing the last 2 TiB sizing limitations from its current storage products.   There appears to have been some confusion in the past, so I thought I would go through and make clear where each product stands:


DS3000

An earlier firmware version added support to create volumes larger than 2 TB.  The maximum volume size is limited only by the size of the largest array you can create.  This capability has been available for some time and hopefully you are already on a much higher release.

DS4000 and DS5000

An earlier firmware version added support to create volumes larger than 2 TB.  The maximum volume size is limited only by the size of the largest array you can create.  This capability has been available for some time and hopefully you are already on a much higher release.

DS8700 and DS8800

The DS8700 and DS8800 will support the creation of volumes larger than 2 TB once a code release in the 6.1 family has been installed.    With this release you will be able to create a volume up to 16 TiB in size.  The announcement letter for this capability is here.


XIV

The volume size on an XIV is limited only by the soft limit of the pool you are creating the volume in.   This allows the possibility of a 161 TB volume, although due to the way the pools are sized, this volume would be over-provisioned by a factor of two to one.
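As a rough illustration of that two-to-one figure (the 161 TB soft size comes from the paragraph above; the arithmetic is mine, not an XIV specification):

```python
# Thin-provisioning sketch: a 161 TB soft (advertised) volume over-provisioned 2:1.
soft_tb = 161                      # pool soft limit from the text above
overprovision = 2                  # two-to-one, as described
hard_tb = soft_tb / overprovision  # real capacity backing the volume

print(f"A {soft_tb} TB volume backed by ~{hard_tb:.1f} TB of hard capacity")
```

In other words, the host sees 161 TB but only roughly half of that is physically backed; writes beyond the hard capacity would need more space added to the pool.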

SVC and Storwize V7000

These two products have two separate concepts:

  1. Volumes (or VDisks) that hosts can see.
  2. Managed disks (or MDisks) that are presented by external storage devices to be virtualized.  Within this there are two further categories:
    –  Internal MDisks created using the Storwize V7000 SAS disks.
    –  External MDisks created by mapping volumes from external storage (such as from a DS4800).

SVC and Storwize V7000 Volumes (VDisks).

Prior to release 5.1 of the SVC firmware, the largest volume or VDisk that you could create using an SVC was 2 TiB in size.   With the 5.1 release this was raised to 256 TiB, as announced here.  When the Storwize V7000 was announced (with the 6.1 release) it also inherited the ability to create 256 TiB volumes.

Storwize V7000 Internal Managed Disks (Array MDisks).

Because the Storwize V7000 has its own internal disks, it can create RAID arrays.   Each RAID array becomes one MDisk.  This means the largest MDisk we can create is limited only by the size of the largest disk (currently 2 TB), times the size of the largest array (16 disks).  In practice we can make arrays of over 18 TiB in size (using a 12 disk RAID6 array with 2 TB disks).   Thus internally the Storwize V7000 supports giant MDisks.  We can also present these giant MDisks to an SVC running 6.1 code and the SVC will be able to work with them.
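A quick sketch of that MDisk arithmetic (drive counts and sizes as stated above; the decimal-TB to binary-TiB conversion is where 20 TB of raw space becomes "over 18 TiB"):

```python
# Usable capacity of a Storwize V7000 array MDisk: 12-disk RAID 6 with 2 TB drives.
drive_tb = 2    # largest drive at the time, in decimal terabytes (10^12 bytes)
disks = 12      # a 12-disk RAID 6 array
parity = 2      # RAID 6 spends two disks' worth of capacity on parity

usable_bytes = (disks - parity) * drive_tb * 10**12  # 20 TB of usable space
usable_tib = usable_bytes / 2**40                    # convert to binary TiB

print(f"{usable_tib:.1f} TiB usable")  # ~18.2 TiB, i.e. "over 18 TiB"
```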

SVC and Storwize V7000 External Managed Disks.

When presenting a volume to the SVC or Storwize V7000 to be virtualized into a pool (a managed disk group), we need to confirm two things.  Firstly, you need to be on firmware version 6.2, as confirmed here for SVC and here for Storwize V7000.   Secondly, the controller presenting the volume has to be approved to present a volume greater than 2 TiB.    From an architectural point of view, MDisks can be up to 1 PB in size, as confirmed here, where it says:

Capacity for an individual external managed disk
1 PB
Note: External managed disks larger than 2 TB are only supported for certain types of storage systems. Refer to the supported hardware matrix for further details.

I recommend you go to the supported hardware matrix and confirm if your controller is approved.  The links for Storwize V7000 6.2 are here and for SVC  here.   As of this writing, the list has still not been updated, but I am reliably informed it will include the DS3000, DS4000, DS5000, DS8700 and DS8800.   It will not initially include XIV, which will come later.   Please also note the following:

  • Support for giant MDisks (greater than 2 TiB) is firmware controlled.  If the controller (e.g. a DS5300) presenting a giant MDisk is not on the supported list for your SVC/Storwize V7000 firmware version, then only the first 2 TiB of that MDisk will be used.
  • If you're already presenting a giant MDisk (and using just the first 2 TiB), then just upgrading your SVC/Storwize V7000 firmware won't make the extra space usable.  You will need to remove the MDisk from the pool, then do an MDisk discovery and then add the MDisk back to the pool.  All of this can of course be done without disruption, using the basic data migration features we have supported since 2003.

What to do in the meantime?

If you're currently using an SVC or external MDisks with a Storwize V7000, then you need to work within the 2 TiB MDisk limit (except for a Storwize V7000 behind an SVC).     The recommendation is a single volume per array for performance reasons (so the disk heads don't have to keep jumping all over the disk to support consecutive extents on different parts of the disk). This can require careful planning.  For instance, using 7+P RAID5 arrays of 450 GB drives makes an array that is over 3 TB.   What to do in this example?

  • Divide it in half? (by creating two 1.5TB volumes)
  • Waste space? (a whole 1 TB)
  • Use smaller arrays? (a 4+P array of 450GB disks is 1.8 TB)

The answer is that where possible, create single-volume arrays using 4+P or larger.   If the disk size precludes that, then create multiple volumes per array and preferably split these volumes across different pools (MDisk groups).
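The sizing examples above can be made concrete in a few lines (RAID5 capacities assuming decimal-gigabyte drives, with the parity disk excluded; the helper function is just for illustration):

```python
def raid5_array_tb(data_disks: int, drive_gb: int) -> float:
    """Usable capacity of a RAID 5 array in decimal TB (parity disk excluded)."""
    return data_disks * drive_gb / 1000

# 7+P RAID 5 of 450 GB drives: over 3 TB, too big for one sub-2-TiB volume.
print(raid5_array_tb(7, 450))  # 3.15

# 4+P RAID 5 of 450 GB drives: 1.8 TB, fits comfortably in a single volume.
print(raid5_array_tb(4, 450))  # 1.8
```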

Anything else to consider?

Well first up, will your Operating System support giant volumes?   Googling produces so much old material that it becomes hard to nail down exact limits.   For Microsoft, read this article here.  For AIX check out this link.   For ESX, check out this link.

Second of course is the consideration of size.  File systems that utilize the space of giant volumes could potentially lead to giant timing issues.   How long will it take to backup, defragment, index or restore a giant file system based on a giant volume (the restore part in particular)?   Outside the scientific, video or geo-physics departments, are giant volumes becoming popular?   Are they being held back by practical realities or plain fear?   Would love to hear your experiences in the real world.

And a big thank you to Dennis Skinner, Chris Canto and Alexis Giral for their help with this post.


About Anthony Vandewerdt

I am an IT Professional who lives and works in Melbourne Australia. This blog is totally my own work. It does not represent the views of any corporation. Constructive and useful comments are very very welcome.
This entry was posted in DS8800, IBM XIV, Storwize V7000, SVC.

7 Responses to Breaking through the 2TB barrier

  1. Pingback: Breaking the 2TB barrier | Storage CH Blog

  2. fagiano christophe says:

    Well, on XIV I have a remark: you create a pool (soft size) of 161 TB, but the hard size of a pool is limited to 79 TB, so when your server reaches 79 TB it will go read-only or no I/O… what is the utility of giving 161 TB then?

  3. fagiano christophe says:

    :-) thanks for your prompt answers, I knew this and this is still part of the XIV specs, I am still trying to change this as this is useless to me.
    Good luck for tomorrow ;-)

  4. David says:

    very good summary, thanks!

  5. nznaleen says:

    very good details
