
Determining Storage Performance Needs is Part Science and Part Art

This morning, I had the pleasure of listening to Greg Goelz, CEO of CloudByte, as he discussed the products provided by his company with the delegates from Gestalt IT’s Storage Field Day 4.  As is generally pretty well known, storage has been one of the more difficult aspects of the data center to tame, a problem that has been exacerbated by virtualization and the inclusion of workloads running with all kinds of different I/O patterns.

During his presentation, Mr. Goelz made a statement, in the form of a question, that I hear all the time when I'm in the field consulting.  Go ask any CIO or even any storage administrator this simple question: how much storage performance, in IOPS, do you need?

In most cases, the answer will be the same: I don’t know.

Moreover, there’s a dirty little secret in the storage industry.  If you ask any vendor how many IOPS their devices can deliver in real-world situations, there’s a good chance that you’ll get the same answer.  Although many vendors publish IOPS numbers – which I believe is a good practice, as long as the playing field is level – those numbers are just a snapshot of a single array at a single point in time running a single kind of workload.  In the real world, who knows what will hit that array?  It’s impossible to test for every possible use case.

The problem with this answer is that it all but forces customers to overbuy performance, which reduces ROI and increases overall cost.

As such, vendors interested in providing their customers with a realistic view of performance capacity need to constantly watch their storage services and see what’s happening.  CloudByte does exactly this, but with the understanding that different kinds of disks have very different characteristics.  For example, the physics of traditional SAS and SATA rotating hard disks are well understood, so determining their maximum performance capability is a relatively exact exercise.
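To make that concrete, here is a minimal back-of-the-envelope sketch of the kind of physics-based ceiling you can calculate for a single spindle; the seek times and RPM figures are illustrative assumptions on my part, not numbers from the presentation.

```python
# Rough theoretical ceiling for one rotating disk: a random I/O costs
# roughly an average seek plus half a rotation, so the ceiling is the
# inverse of that service time. Figures are illustrative only.

def rotating_disk_iops(avg_seek_ms: float, rpm: int) -> float:
    """Approximate maximum random IOPS for a single spindle."""
    avg_rotational_latency_ms = (60_000 / rpm) / 2   # half a revolution, in ms
    service_time_ms = avg_seek_ms + avg_rotational_latency_ms
    return 1000 / service_time_ms

# e.g. a 15K RPM SAS drive with ~3.5 ms average seek -> roughly 180 IOPS
print(round(rotating_disk_iops(3.5, 15_000)))
# e.g. a 7.2K RPM SATA drive with ~8.5 ms average seek -> roughly 80 IOPS
print(round(rotating_disk_iops(8.5, 7_200)))
```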

When it comes to SSDs, however, the situation is different.  Each SSD carries with it radically different IOPS potential that can only really be measured once the disk is in the system and handling workloads.  CloudByte’s approach is to make a best effort to gauge performance capability, but to do so somewhat conservatively; the goal is to make sure that the customer doesn’t hit a surprise performance wall, which would have an immediate and negative impact.
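A minimal sketch of that kind of conservative estimate might look like the following; the sampling source, the low-percentile choice, and the safety factor are my own illustrative assumptions, not CloudByte’s actual algorithm.

```python
# Derive an SSD's usable IOPS ceiling from what the disk actually
# delivers under real workloads, then discount it so a customer is
# unlikely to hit a surprise wall. Assumptions are illustrative only.

import statistics

def conservative_iops_ceiling(observed_peak_iops, safety_factor=0.8):
    """Take a low quantile of observed peak IOPS, then apply a safety margin."""
    low_quantile = statistics.quantiles(observed_peak_iops, n=10)[0]  # ~10th percentile
    return low_quantile * safety_factor

# Peak IOPS observed over several measurement windows (made-up numbers)
samples = [42_000, 38_500, 47_200, 35_900, 44_800, 39_300, 41_100]
print(round(conservative_iops_ceiling(samples)))  # a deliberately pessimistic ceiling
```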

During the session, I asked Mr. Goelz this question: “Is it safe to say that getting performance potential is mostly science with some art and that, over time, it becomes more scientific based on observation?”  He indicated that this was exactly their approach.

CloudByte keeps vigil over the storage pool and can constantly report the current IOPS load and, on a best-effort basis, the IOPS remaining.  That second number answers a really important question for customers as they continue to add workloads to their environments.  With that information at their fingertips, customers can have a reasonable level of certainty that they can move forward with a deployment.
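In spirit, that report boils down to something like the sketch below; the pool ceiling and per-volume counters are invented for illustration and are not CloudByte’s actual data model.

```python
# Current IOPS load versus estimated remaining headroom for a pool.
# All names and numbers here are illustrative assumptions.

def iops_headroom(estimated_pool_ceiling, current_iops_by_volume):
    """Summarize current load and best-effort remaining IOPS."""
    current_total = sum(current_iops_by_volume.values())
    return {
        "current_iops": current_total,
        "estimated_remaining_iops": max(estimated_pool_ceiling - current_total, 0),
    }

pool_ceiling = 30_000  # e.g. the conservative ceiling from the previous sketch
load = {"vm-sql01": 6_200, "vm-exchange": 4_800, "vm-web": 1_500}
print(iops_headroom(pool_ceiling, load))
# {'current_iops': 12500, 'estimated_remaining_iops': 17500}
```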

However, I’m still a big believer in general monitoring by a third party as an essential element of the data center.  With a combined approach, customers stand a better chance of maximizing the return on their storage investment without having to worry too much about a gray area of performance uncertainty.  At the same time, unlike capacity, performance will always carry at least some uncertainty because requirements are constantly changing.