Lingo Explained: CPU-bound vs. Disk-bound storage arrays

Sometimes adding spindles is not enough to resolve a storage performance problem. Today – with high-performing SSDs – it’s getting more and more important to build a good understanding of the architecture of a storage array. Eventually this will help you determine the caveats and ask the right questions to get the maximum performance out of your IT infrastructure.

Some time ago, I was confronted with a storage performance problem. The disk array installed in an appliance was able to move 600 GB/hour at a 60% write / 40% read mix. When I contacted the vendor, they advised me to install more spindles, as this would distribute the load and improve performance. Upon validating the specifications of the storage array controller, it became clear this would not resolve the situation: the controller was already running at its maximum performance! This is a perfect example of a CPU-bound system.
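
Back-of-the-envelope: 600 GB/hour works out to roughly 170 MB/s of sustained throughput, or – at the 60/40 mix – around 100 MB/s of writes and 70 MB/s of reads, all of which has to pass through that same controller.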

The goal of this blog post is to explain the difference between CPU-bound and disk-bound storage systems.

CPU-bound system:
A CPU-bound storage system – as in my situation – can store more data, but it does not benefit from additional spindles or a faster class of disks. Adding disks only increases storage capacity; it does not deliver higher performance.
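
To make this concrete, here is a minimal sketch – with hypothetical, made-up numbers rather than vendor specs – of how the controller ceiling caps whatever the spindles can deliver:

def effective_throughput(n_disks: int,
                         per_disk_mb_s: float,
                         controller_limit_mb_s: float) -> float:
    """Usable array throughput in MB/s for this simple model."""
    return min(n_disks * per_disk_mb_s, controller_limit_mb_s)

# Assumed figures: a controller that tops out around 1,000 MB/s and disks
# that each sustain about 180 MB/s (illustrative, not vendor specs).
for disks in (4, 6, 8, 12, 24):
    print(disks, "disks ->", effective_throughput(disks, 180, 1000), "MB/s")

# The output plateaus at 1,000 MB/s from 6 disks onward: the array is
# CPU-bound, and extra spindles only add capacity, not performance.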

Systems that have been in service for several years without a refresh are most often confronted with this limitation. Therefore it’s important to ask for performance figures before buying new equipment.

Disk-bound system:
In a disk-bound system, performance is limited by the disks themselves. Higher performance can be achieved by replacing the disks with a faster type or by adding more spindles!
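
In the same toy model as above, the disk-bound regime is the left side of the curve: with 4 disks at an assumed 180 MB/s each you get about 720 MB/s, well under the assumed 1,000 MB/s controller ceiling, so adding spindles (or moving to a faster disk class) translates directly into higher throughput.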

A caveat…
Today’s storage arrays are redundantly configured with two or more controllers, which can run in an active/active configuration. Business operations run smoothly as both controllers can be addressed for write and read operations. I notice customers are not paying attention to the performance degradation which occurs after a controller fail-over: at that moment, the surviving controller needs to be able to handle all operations by itself. Therefore it’s advised to scale your storage infrastructure on a per-controller basis.
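
A simple way to express that rule of thumb – purely illustrative, with made-up controller ratings – is to check the peak load against what the surviving controllers can still deliver:

def survives_failover(peak_load_mb_s: float,
                      per_controller_limit_mb_s: float,
                      controllers: int = 2) -> bool:
    """True if the remaining controllers can absorb the full peak load
    after one controller has failed."""
    remaining = controllers - 1
    return peak_load_mb_s <= remaining * per_controller_limit_mb_s

# Two controllers assumed at ~800 MB/s each: driving the array at 1,200 MB/s
# looks comfortable in normal operation (1,600 MB/s aggregate), but the
# survivor cannot carry it alone; 750 MB/s is sized per controller.
print(survives_failover(1200, 800))  # False -> degraded after fail-over
print(survives_failover(750, 800))   # True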

To be honest… the performance degradation is not easily calculated, as some of the internal processes can get disabled, such as the heartbeat and/or cache mirroring. Additionally, you should ask how the controller caching mechanism behaves when one of the controllers fails. Some – older – systems fall back to a write-through mechanism (the write is confirmed once it is written to physical disk) instead of write-back (the write is confirmed once it is received in the partner controller’s cache), hitting performance once more.
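
As a rough illustration – with assumed latencies rather than measured ones – the switch from write-back to write-through changes which latency gates every write acknowledgement, and with it the achievable write IOPS:

CACHE_ACK_MS = 0.5  # assumed: write acknowledged once it sits in (mirrored) cache
DISK_ACK_MS = 8.0   # assumed: write acknowledged only after it hits spinning disk

def max_write_iops(ack_latency_ms: float, queue_depth: int = 32) -> float:
    """Little's-law upper bound on write IOPS for a given ack latency."""
    return queue_depth * 1000.0 / ack_latency_ms

print("write-back   :", max_write_iops(CACHE_ACK_MS))  # ~64,000 IOPS
print("write-through:", max_write_iops(DISK_ACK_MS))   # ~4,000 IOPS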

Of course, today you can find more intelligent mechanisms to optimize your storage array. I’m thinking of host-based caching solutions, such as Fusion IO.

Thanks for reading!

Ruben
