We’ve covered software-defined storage (SDS) in the past on the blog, delving into how it can automate many of your storage administration tasks. Today we’ll dig a bit deeper into how SDS improves storage capacity management by matching the performance of the storage attached to each virtual machine to pre-set rules.
In vSphere, storage management involves balancing performance, service levels, and capacity planning. The SDS controls in the VMware ecosystem are called Storage Policy Based Management (SPBM), and with them you no longer have to provision virtual machines individually according to their storage requirements.
Here’s how SPBM eliminates the need to overprovision and manually manage storage arrays.
Storage tiers are often used to differentiate arrays by service level and performance. In this model, low-performance storage is used for long-term storage and archives, or other applications that do not require high IOPS and fast retrieval. The higher tiers are reserved for financial applications, high-performance databases, VDI, modeling, and so forth.
In the traditional storage provisioning model, each virtual machine is assigned to one of these tiers. The tiers are fixed: the hardware is ordered and deployed with its intended use in mind, and each application is bound to a specific array.
A more flexible model later emerged, taking advantage of storage array technology that can accommodate multiple service levels on a single array. In other words, what used to be Tier 1 and Tier 3 storage can coexist on the same piece of storage hardware, with the tiers configured according to data protection and performance/IOPS.
This Storage Pool model is much more flexible than traditional provisioning: the virtual machine can be manually reassigned between static service levels at almost any time by moving its datastore to the required storage pool.
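The pool model can be sketched as a toy in code. This is purely illustrative (the `StoragePool` and `VirtualMachine` names are made up for this post, not part of any VMware SDK): pools advertise a service level, and reassigning a VM is a manual move between pools.

```python
# Illustrative toy model of the Storage Pool approach -- NOT a VMware API.
# Pools host distinct service levels; an admin manually moves a VM's
# datastore between pools to change its service level.

from dataclasses import dataclass


@dataclass
class StoragePool:
    name: str
    service_level: str   # e.g. "gold", "silver", "bronze"
    iops_limit: int      # performance ceiling for the pool


@dataclass
class VirtualMachine:
    name: str
    pool: StoragePool

    def move_to(self, pool: StoragePool) -> None:
        """Manually reassign this VM's datastore to another pool."""
        self.pool = pool


gold = StoragePool("pool-01", "gold", iops_limit=50_000)
bronze = StoragePool("pool-02", "bronze", iops_limit=2_000)

vm = VirtualMachine("archive-db", pool=gold)
vm.move_to(bronze)  # demote once the workload no longer needs Tier 1
print(vm.pool.service_level)  # bronze
```

The key point the sketch makes: the move is still an explicit, administrator-driven action, which is exactly the limitation SPBM removes.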
Even with the additional flexibility of storage pools, both models require careful forecasting (otherwise known as a lot of guesswork). Administrators must estimate, as best they can, how many applications will use a given storage tier, and be prepared for future growth and change.
The result is often too many high-performance arrays and tiers deployed, at significant upfront capital and ongoing operational expense. Less critical applications are placed in a higher storage tier than they need, mostly to keep the few high-performance apps from facing potential performance issues or downtime.
The VMware hypervisor can now dynamically assign storage resources based on each VM’s requirements for performance, capacity, and data protection. Instead of a “bottom-up” approach, where the storage arrays dictate the storage policies, the SDS model uses a “top-down” approach, with the hypervisor setting the storage requirements. Storage policies are configured within vSphere, each VM is assigned to a policy, and the policy engine matches the VM to the precise service level and performance it requires.
This is done by abstracting a combination of storage within x86 servers, NAS or SAN arrays, and object storage into virtual datastores segmented by protection, data mobility, and performance.
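To make the “top-down” idea concrete, here is a minimal, hypothetical sketch of a policy-matching engine. The names (`Datastore`, `StoragePolicy`, `place_vm`) and the matching rules are assumptions invented for illustration, not SPBM’s actual implementation: datastores advertise capabilities, a VM declares a policy, and the matcher picks a compliant datastore.

```python
# Hypothetical sketch of top-down, policy-driven placement -- illustrative
# only, not any VMware SDK. Datastores advertise capabilities; the policy
# engine finds one that satisfies every rule in the VM's policy.

from dataclasses import dataclass


@dataclass(frozen=True)
class Datastore:
    name: str
    iops: int            # advertised performance capability
    replication: bool    # data-protection capability
    free_gb: int


@dataclass(frozen=True)
class StoragePolicy:
    min_iops: int
    needs_replication: bool
    capacity_gb: int


def place_vm(policy: StoragePolicy, datastores: list[Datastore]) -> Datastore:
    """Return a datastore that satisfies every rule in the policy."""
    candidates = [
        ds for ds in datastores
        if ds.iops >= policy.min_iops
        and (ds.replication or not policy.needs_replication)
        and ds.free_gb >= policy.capacity_gb
    ]
    if not candidates:
        raise RuntimeError("No compliant datastore for this policy")
    # Prefer the least capable compliant datastore, so high tiers are
    # not wasted on undemanding workloads (the overprovisioning problem
    # described above).
    return min(candidates, key=lambda ds: ds.iops)


stores = [
    Datastore("flash-01", iops=100_000, replication=True, free_gb=500),
    Datastore("hybrid-01", iops=20_000, replication=True, free_gb=2_000),
    Datastore("archive-01", iops=1_000, replication=False, free_gb=10_000),
]
policy = StoragePolicy(min_iops=10_000, needs_replication=True, capacity_gb=100)
print(place_vm(policy, stores).name)  # hybrid-01
```

Note the inversion: the VM’s policy drives placement, and the administrator never names a tier or an array at all.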
Storage policies can also include data services such as snapshots and replication, so if your critical applications require more frequent backups, they will be configured that way automatically when you spin up the VM.
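Extending the same toy model, a policy can bundle data services so a new VM inherits them at creation time. Again, `PolicyWithServices` and `provision_vm` are illustrative names assumed for this sketch, not VMware functions:

```python
# Illustrative only -- a policy bundling data services, so provisioning a
# VM under it automatically configures its snapshot and replication setup.

from dataclasses import dataclass


@dataclass(frozen=True)
class PolicyWithServices:
    name: str
    snapshot_interval_hours: int
    replicate: bool


def provision_vm(vm_name: str, policy: PolicyWithServices) -> dict:
    """Create a VM record that inherits data services from its policy."""
    return {
        "vm": vm_name,
        "snapshots_every_h": policy.snapshot_interval_hours,
        "replicated": policy.replicate,
    }


critical = PolicyWithServices("tier-1-critical",
                              snapshot_interval_hours=1,
                              replicate=True)
print(provision_vm("payroll-db", critical))
```

No per-VM backup configuration is needed; change the policy once and every VM assigned to it follows.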
For more information on setting up and configuring your Virtual Machine Storage Policies, visit this VMware help article.