New SVC Storage Performance Council SPC-1 benchmark breaks 500,000 IOPS barrier

The IBM SVC has been setting records in SPC-1 (OLTP-like) benchmarks for many years. Recently, however, HP stole the crown with a 3PAR result of 450,212.66 IOPS.

But in breaking news, the SVC is back on top with the very first SPC-1 result to exceed 500,000 IOPS (520,043.99 to be precise!). You can see the executive summary here.

This benchmark used eight of the current SVC engines (model CG8) with Storwize V7000 as the backend disk. It shows the awesome power of SVC: its ability to scale and to handle very large configurations with very large throughput requirements. It also shows the power of the IBM pSeries servers that were used to drive these IOPS.

The full disclosure report is here.


About Anthony Vandewerdt

I am an IT professional who lives and works in Melbourne, Australia. This blog is totally my own work. It does not represent the views of any corporation. Constructive and useful comments are very welcome.
This entry was posted in IBM Storage, SAN, Storwize V7000, SVC. Bookmark the permalink.

9 Responses to New SVC Storage Performance Council SPC-1 benchmark breaks 500,000 IOPS barrier

  1. For half the price and almost twice the capacity, couldn't you get a Gen3 XIV? I'll be curious to see if IBM submits the Gen3 to SPC-1 once the new read-cache SSDs have been implemented. Gen3 currently will do 500K IOPS without it.

  2. Pingback: Sunny Skies for SVC, Storwize « benchmarkingblog

  3. Paul Haverfield says:

    Hi Anthony,

    I work for HP, in the role of Principal Storage Technologist for APJ; can you help me understand some aspects of this please?
    First, I am trying to understand exactly what role and value the 8-node SVC cluster played in the config. According to the Full Disclosure Report (pp. 62-63), all 192 back-end Vdisks from the 16 x V7000 arrays were “passed through” the SVC cluster “as-is”. There does not appear to be any value-add from the SVC cluster to the V7000 Vdisks. Am I mis-reading the FDR appendix C?

    Then, at the AIX hosts, the 192 “Host System Volumes” presented from the SVC cluster are striped using LVM into volume groups and carved into 142 LUNs for use by the SPC-1 workload engine.

    Maybe I’m wrong and am missing something not written up in the FDR, but I cannot see any value contributed by SVC in this SPC-1 config. If my reading is correct, then how do you substantiate your statement “…awesome power of SVC, its ability to scale and to handle very large configurations with very large throughput requirements”? If SVC has such awesome power, why would it not perform the basic striping functions at the SVC layer, rather than at the more complex host layer within AIX?


    Paul Haverfield [HP Storage]

    • Hi Paul.

      Clearly with these truly fantastic results, you have to forgive me for getting excited.

      IBM took 16 midrange storage controllers and aggregated them under one virtualization layer and then drove them to 270.85 IOPS per disk. That’s a pretty cool IOPS per disk number. Overall what it shows is that the SVC can harness a huge number of disks and still not be saturated.
      So the client gets an industry-acclaimed, best-of-breed, easy-to-use interface, plus all the virtualization, non-disruptive data migration, thin provisioning, advanced copy services and more. The ability to move additional hardware into or out of the environment is a huge plus. Every SVC customer I work with says the same thing: it is the flexibility to seamlessly move workload that they truly love (you can even put an EVA or a 3PAR behind one!).

      As for LVM striping vs hardware striping, it is an interesting debate. There are pros and cons (the classic IT answer: it depends). Almost universally, clients get a higher aggregate level of IOPS by using SVC striping (because we spread the workload more evenly over spindles that were previously in islands). If you conclude that the SPC-1 test proves that AIX LVM can achieve the same thing, well, in this highly homogeneous case, that would be true. But instead of dealing with one SVC layer, the client would be managing disks from 16 separate sources, and be minus all the advantages I already listed.

      Of course you could reply that by buying one 3PAR they would also get a single point of management that peaks at over 400,000 IOPS (which is also impressive). Hey, the 3PAR SPC price per IOPS is cheaper, so you could also argue on that point (although street price, list price and SPC price can vary wildly, as I am sure you know). So I actually think your point about striping is probably not the right place to nitpick.

      I hope this goes some way to answering your questions, and sorry for the delay in responding.
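      [Editor's note: as a rough sanity check on the IOPS-per-disk figure quoted above, the result divides out cleanly if the backend held roughly 1,920 drives. The drive count used here (16 V7000s at 120 drives each) is an assumption for illustration; the authoritative configuration is in the full disclosure report.]

```python
# Sanity check of the ~270.85 IOPS-per-disk figure quoted in the reply above.
# ASSUMPTION: 16 Storwize V7000 arrays at 120 drives each (1,920 drives total);
# the exact drive count comes from the SPC-1 full disclosure report, not this post.

SPC1_IOPS = 520_043.99      # reported SPC-1 result for the 8-node SVC cluster
V7000_COUNT = 16            # backend Storwize V7000 arrays
DRIVES_PER_V7000 = 120      # assumed drives per array (hypothetical)

total_drives = V7000_COUNT * DRIVES_PER_V7000
iops_per_drive = SPC1_IOPS / total_drives
print(f"{total_drives} drives -> {iops_per_drive:.2f} IOPS per drive")
```

      With those assumed numbers the division gives roughly 270.9 IOPS per drive, consistent with the ~270.85 figure quoted.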

      • Paul Haverfield says:

        Hi Anthony,

        thanks for the response. Curious what your thoughts are on the utilisation and cost aspects of this result. Looking at the total ASU (application storage unit – the capacity of the storage provided to the SPC-1 workload engine) price per GB we have $36.88/GB for the new SVC 8-node ($3598956 / 97581GB) config; and $12.87/GB for the 3PAR P10000 result you referenced ($2965892 / 230400GB) – the SVC result is heaps more expensive (> 2.5x or 187%) – relative to the 16% performance gain received. In a cost conscious climate – how “real-world” is the configuration used in this submission ?

        best regards,

        Paul Haverfield (HP)
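        [Editor's note: Paul's cost comparison can be reproduced directly from the totals he quotes; the sketch below uses only the figures in his comment (total SPC-1 price, ASU capacity in GB, and the two headline IOPS results from the post).]

```python
# Reproduce the $/GB comparison from the comment above, using only the
# totals quoted there: total SPC-1 submission price and ASU capacity in GB.

svc_price, svc_gb = 3_598_956, 97_581     # 8-node SVC submission
par_price, par_gb = 2_965_892, 230_400    # 3PAR P10000 submission

svc_per_gb = svc_price / svc_gb           # ~36.88 $/GB
par_per_gb = par_price / par_gb           # ~12.87 $/GB
cost_ratio = svc_per_gb / par_per_gb      # ~2.87x

# Headline performance gain of the SVC result over the 3PAR result
perf_gain = 520_043.99 / 450_212.66 - 1   # ~15.5%

print(f"SVC ${svc_per_gb:.2f}/GB vs 3PAR ${par_per_gb:.2f}/GB "
      f"({cost_ratio:.2f}x) for a {perf_gain:.1%} IOPS gain")
```

        The per-GB figures match the $36.88 and $12.87 quoted, and the performance delta works out to about 15.5%, close to the ~16% mentioned.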

      • Hi Paul.

        On the one hand I cannot tell you to ignore what is clearly printed in the official SPC document.
        The prices listed are the prices listed.
        But I question the real world nature of the selected discount percentages.
        If we are really going to be serious about using just the $ numbers to come up with ‘real-world’ decisions, then frankly we need real world prices.

        The 3PAR discount in the SPC Exec summary is 50% on all the big ticket items.
        The IBM discount is 39% on the big ticket items. On the FC switches it is only 20%.

        Are those real world discounts?
        In a perfect world we would have street prices and then we could have a real discussion.

  4. The joys of SVC. And not only does it make things go faster, it virtualises too… Speed is just a side effect…

    • Good point.
      I have received some feedback asking exactly what the point of such a test is.
      IBM's purpose in running the SPC-1 benchmark was not to demonstrate every storage management feature of the box, but to show the performance potential that a system of this kind brings to the table.
      It certainly achieved that!
