Exact MSP Space Accounting on a Storwize Pool

I have blogged in the past about the classic IT story, The Cuckoo's Egg by Clifford Stoll: a true story that details how Clifford discovered a hacker while trying to account for nine seconds of mainframe processing time.

I was reminded of this recently while doing an MSP space accounting project. MSPs (Managed Service Providers) are understandably cost focused as they try to compete with low-cost IaaS (Infrastructure as a Service) providers like Amazon. To control costs, shared resources are normally employed, as well as thin-provisioning and its cousin over-provisioning. Don't confuse the two: thin-provisioning means consuming only the exact resources needed for an objective, while over-provisioning means promising or committing more resources than you actually have, in the hope that no one calls your bluff. You can always use thin-provisioning without using over-provisioning.

A Storwize pool can use both thin and over-provisioning. As an MSP looking at pool usage, you may want to know exactly how much space each client in the shared pool is using. Now I don't want to burn time explaining the exact workings of thin provisioning (something that Andrew Martin explains very well here), but I do want to point out a quirk that may confuse you while trying to do space accounting.

In this example I have a Storwize pool that is 32.55 TiB in size and is showing 22.93 TiB Used. You can clearly see we have over-allocated the 32.55 TiB of disk space by creating 75.50 TiB of virtual volumes!

[Screenshot: pool capacity view showing 32.55 TiB capacity, 22.93 TiB used and 75.50 TiB of virtual volumes]

Now this is significant, because if I wanted to do space accounting I would expect the Used capacity of all volumes in the pool to sum to 22.93 TiB. In other words, if five end clients are sharing this space and I know which volumes relate to which client, I would expect the sum total of all volumes used by all clients to equal 22.93 TiB.

If I bring up the properties panel for the pool I can clearly see metrics for the pool, including the extent size (in this example 2.00 GiB; remember that, it becomes significant later).

[Screenshot: pool properties panel showing a 2.00 GiB extent size]

Now for each thin provisioned volume I get three size properties:

Used: 768.00 KiB   
Real: 1.02 GiB   
Total: 100.00 GiB  

To explain what these are:

  • Used capacity is effectively how much data has been written to the volume (including the B-tree metadata used to track thin space allocation).
  • Real capacity is how much space, in grains, has been pre-allocated to the volume from extents allocated out of the pool.
  • Total capacity is the size advertised to the hosts that can access this volume.

This means I could sum either Used capacity or Real capacity. Since Real capacity is always larger than Used capacity, it makes more sense to sum Real capacity, especially if this is the number I am using to determine usage by clients inside a shared pool.

To get the used space size of all volumes we need to differentiate between fully provisioned (Generic) volumes and Thin-Provisioned volumes.

This command will grab all the Generic volumes in a specific pool (in this example called InternalPool1):

lsvdisk -bytes -delim , -filtervalue se_copy_count=0:mdisk_grp_name=InternalPool1

This command will grab all the thin volumes in a specific pool (in this example called InternalPool1):

lssevdiskcopy -bytes -delim , -nohdr -filtervalue mdisk_grp_name=InternalPool1

Add the -nohdr option to suppress the header line if you wish to use these in a script (the sketch below instead keeps the header, so it can find columns by name).
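If you would rather not use a spreadsheet, here is a rough shell sketch of the same accounting. It assumes SSH access to the cluster as a hypothetical admin@cluster, and because field positions can shift between firmware levels, the awk locates each column by its header name rather than hard-coding an index (which is why -nohdr is deliberately left off):

#!/bin/sh
# Sum a named column from comma-delimited CLI output (header included).
sum_column () {
  awk -F, -v col="$1" '
    NR == 1 { for (i = 1; i <= NF; i++) if ($i == col) c = i; next }
    c       { total += $c }
    END     { printf "%.0f\n", total }'
}

# Generic (fully provisioned) volumes: sum the capacity field.
generic=$(ssh admin@cluster "lsvdisk -bytes -delim , \
  -filtervalue se_copy_count=0:mdisk_grp_name=InternalPool1" | sum_column capacity)

# Thin volumes: sum the real_capacity field.
thin=$(ssh admin@cluster "lssevdiskcopy -bytes -delim , \
  -filtervalue mdisk_grp_name=InternalPool1" | sum_column real_capacity)

echo "combined total: $((generic + thin)) bytes"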

For the generic volumes we can sum the capacity field. In this example pool, I used a spreadsheet and found it sums to 19,404,662,243,328 bytes.

For the thin volumes we can sum the real capacity field. In this example pool, I used a spreadsheet and found it sums to 5,260,831,053,824 bytes.

This brings us to a combined total of 24,665,493,297,152 bytes, which is 22.43 TiB.

The problem here is obvious. I expected to account for 22.93 TiB of space, but summing the capacity of the full-fat volumes and the real capacity of the thin volumes doesn't add up to what I expect. In fact, in this example I am short by around 0.5 TiB of used capacity. How do I allocate this space to a specific client if no volume owns up to using it?

I can actually spot this in the CLI as well, using just the lsmdiskgrp command. If I subtract the real capacity of 24,665,493,297,152 bytes from the total capacity of 35,787,814,993,920 bytes, I get 11,122,321,696,768 bytes, which is nowhere near the reported free capacity of 10,578,504,450,048 bytes. This again reveals 543,817,246,720 bytes (0.494 TiB) of allocated space that is not showing against any volume.

IBM_Storwize:Actifio1:anthonyv>lsmdiskgrp -bytes 0
 id 0
 name InternalPool1
 status online
 mdisk_count 1
 vdisk_count 525
 capacity 35787814993920
 extent_size 2048
 free_capacity 10578504450048
 virtual_capacity 83010980413440
 used_capacity 23916077907968
 real_capacity 24665493297152
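
Here is a small sketch that derives the same gap straight from that output (again assuming a hypothetical admin@cluster). The detailed view prints one attribute-value pair per line, which awk can pick apart:

ssh admin@cluster "lsmdiskgrp -bytes 0" | awk '
  $1 == "capacity"      { cap  = $2 }
  $1 == "real_capacity" { real = $2 }
  $1 == "free_capacity" { free = $2 }
  END {
    gap = (cap - real) - free    # space allocated but owned by no volume
    printf "unaccounted: %.0f bytes (%.3f TiB)\n", gap, gap / 2^40
  }'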

The answer is that the space is actually allocated to volumes, but is not being accounted for at a volume level. If you scroll up to the second screenshot showing the pool overview, you can see the extent size is 2 GiB. That means the minimum amount of space that gets allocated to a volume is actually 2 GiB. But if we look at the properties of a single volume, there is no indication that this volume is actually holding down 2 GiB of pool space. In this example I can see only 1.02 GiB of space being claimed. So for this example volume there is actually 0.98 GiB of space allocated to the volume which is never acknowledged as being dedicated to that volume.

[Screenshot: volume properties showing 1.02 GiB of real capacity claimed]
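One way to see where the missing space lives is to round each thin volume's real capacity up to the next whole extent. This is only a sketch of that idea, reusing the header-lookup trick from earlier and hard-coding this pool's 2 GiB extent size:

ssh admin@cluster "lssevdiskcopy -bytes -delim , \
  -filtervalue mdisk_grp_name=InternalPool1" | awk -F, '
  NR == 1 { for (i = 1; i <= NF; i++) if ($i == "real_capacity") c = i; next }
  {
    extent  = 2 * 1024 * 1024 * 1024      # 2 GiB, this pool's extent size
    rounded = int(($c + extent - 1) / extent) * extent
    pinned += rounded                     # space this volume really pins in the pool
  }
  END { printf "extent-rounded thin total: %.0f bytes\n", pinned }'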

So how do I cleanly allocate this 0.5 TiB?

I see two choices. The first is to simply determine the shortfall, divide it by the number of thin-provisioned volumes, and then add that usage to each thin volume. In this example I have 519 thin volumes, so if I divide 543,817,246,720 by 519, that's pretty well 1 GiB per volume that I could simply add to each volume's space allocation.
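In script form, that first option is just one division (the numbers below are this pool's, taken from the checks above):

shortfall=543817246720   # unaccounted bytes from the lsmdiskgrp check
thin_vols=519            # thin volume count in this pool
echo "add $((shortfall / thin_vols)) bytes (~1 GiB) to each thin volume"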

The second is to accept it as a space tax and simply plan for it. The issue is far less pronounced if the volume quantity is small and the volume size is large. The issue is also far less pronounced with smaller extent sizes. At very small extent sizes it will most likely not occur at all, or be truly trivial in size (like Clifford's nine seconds). In this example, simply using 1 GiB extents would have pretty well masked the issue. But remember that the smaller your extent size, the smaller your maximum cluster size can be: a 2 GiB extent size means the maximum cluster size is 8 PiB.
